00:00:00.001 Started by upstream project "autotest-per-patch" build number 132393 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.005 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.006 The recommended git tool is: git 00:00:00.006 using credential 00000000-0000-0000-0000-000000000002 00:00:00.008 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.057 Fetching changes from the remote Git repository 00:00:00.059 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.108 Using shallow fetch with depth 1 00:00:00.108 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.108 > git --version # timeout=10 00:00:00.160 > git --version # 'git version 2.39.2' 00:00:00.160 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.195 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.195 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.014 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.026 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.039 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.039 > git config core.sparsecheckout # timeout=10 00:00:06.051 > git read-tree -mu HEAD # timeout=10 00:00:06.069 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.091 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.092 > git rev-list 
--no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.181 [Pipeline] Start of Pipeline 00:00:06.193 [Pipeline] library 00:00:06.194 Loading library shm_lib@master 00:00:06.194 Library shm_lib@master is cached. Copying from home. 00:00:06.208 [Pipeline] node 00:00:21.210 Still waiting to schedule task 00:00:21.211 Waiting for next available executor on ‘vagrant-vm-host’ 00:03:39.908 Running on VM-host-WFP1 in /var/jenkins/workspace/raid-vg-autotest 00:03:39.909 [Pipeline] { 00:03:39.917 [Pipeline] catchError 00:03:39.918 [Pipeline] { 00:03:39.928 [Pipeline] wrap 00:03:39.936 [Pipeline] { 00:03:39.943 [Pipeline] stage 00:03:39.945 [Pipeline] { (Prologue) 00:03:39.963 [Pipeline] echo 00:03:39.964 Node: VM-host-WFP1 00:03:39.969 [Pipeline] cleanWs 00:03:39.978 [WS-CLEANUP] Deleting project workspace... 00:03:39.978 [WS-CLEANUP] Deferred wipeout is used... 00:03:39.984 [WS-CLEANUP] done 00:03:40.178 [Pipeline] setCustomBuildProperty 00:03:40.275 [Pipeline] httpRequest 00:03:40.594 [Pipeline] echo 00:03:40.596 Sorcerer 10.211.164.20 is alive 00:03:40.605 [Pipeline] retry 00:03:40.607 [Pipeline] { 00:03:40.622 [Pipeline] httpRequest 00:03:40.627 HttpMethod: GET 00:03:40.628 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:03:40.628 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:03:40.635 Response Code: HTTP/1.1 200 OK 00:03:40.636 Success: Status code 200 is in the accepted range: 200,404 00:03:40.636 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:04:01.186 [Pipeline] } 00:04:01.207 [Pipeline] // retry 00:04:01.216 [Pipeline] sh 00:04:01.508 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:04:01.527 [Pipeline] httpRequest 00:04:02.004 [Pipeline] echo 00:04:02.006 Sorcerer 10.211.164.20 is alive 00:04:02.016 [Pipeline] retry 00:04:02.018 [Pipeline] { 
00:04:02.032 [Pipeline] httpRequest 00:04:02.036 HttpMethod: GET 00:04:02.037 URL: http://10.211.164.20/packages/spdk_82b85d9ca4865badd808b645e20c6627f4e8e859.tar.gz 00:04:02.038 Sending request to url: http://10.211.164.20/packages/spdk_82b85d9ca4865badd808b645e20c6627f4e8e859.tar.gz 00:04:02.041 Response Code: HTTP/1.1 200 OK 00:04:02.042 Success: Status code 200 is in the accepted range: 200,404 00:04:02.043 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_82b85d9ca4865badd808b645e20c6627f4e8e859.tar.gz 00:05:52.155 [Pipeline] } 00:05:52.172 [Pipeline] // retry 00:05:52.188 [Pipeline] sh 00:05:52.472 + tar --no-same-owner -xf spdk_82b85d9ca4865badd808b645e20c6627f4e8e859.tar.gz 00:05:55.039 [Pipeline] sh 00:05:55.330 + git -C spdk log --oneline -n5 00:05:55.330 82b85d9ca bdev/malloc: malloc_done() uses switch-case for clean up 00:05:55.330 0728de5b0 nvmf: Add hide_metadata option to nvmf_subsystem_add_ns 00:05:55.330 349af566b nvmf: Get metadata config by not bdev but bdev_desc 00:05:55.330 1981e6eec bdevperf: Add hide_metadata option 00:05:55.330 66a383faf bdevperf: Get metadata config by not bdev but bdev_desc 00:05:55.346 [Pipeline] writeFile 00:05:55.361 [Pipeline] sh 00:05:55.638 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:05:55.648 [Pipeline] sh 00:05:55.925 + cat autorun-spdk.conf 00:05:55.925 SPDK_RUN_FUNCTIONAL_TEST=1 00:05:55.925 SPDK_RUN_ASAN=1 00:05:55.925 SPDK_RUN_UBSAN=1 00:05:55.925 SPDK_TEST_RAID=1 00:05:55.925 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:55.931 RUN_NIGHTLY=0 00:05:55.933 [Pipeline] } 00:05:55.946 [Pipeline] // stage 00:05:55.963 [Pipeline] stage 00:05:55.965 [Pipeline] { (Run VM) 00:05:55.978 [Pipeline] sh 00:05:56.257 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:05:56.257 + echo 'Start stage prepare_nvme.sh' 00:05:56.257 Start stage prepare_nvme.sh 00:05:56.257 + [[ -n 4 ]] 00:05:56.257 + disk_prefix=ex4 00:05:56.257 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 
00:05:56.257 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:05:56.257 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:05:56.257 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:56.257 ++ SPDK_RUN_ASAN=1 00:05:56.257 ++ SPDK_RUN_UBSAN=1 00:05:56.257 ++ SPDK_TEST_RAID=1 00:05:56.257 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:56.257 ++ RUN_NIGHTLY=0 00:05:56.257 + cd /var/jenkins/workspace/raid-vg-autotest 00:05:56.257 + nvme_files=() 00:05:56.257 + declare -A nvme_files 00:05:56.257 + backend_dir=/var/lib/libvirt/images/backends 00:05:56.257 + nvme_files['nvme.img']=5G 00:05:56.257 + nvme_files['nvme-cmb.img']=5G 00:05:56.257 + nvme_files['nvme-multi0.img']=4G 00:05:56.257 + nvme_files['nvme-multi1.img']=4G 00:05:56.257 + nvme_files['nvme-multi2.img']=4G 00:05:56.257 + nvme_files['nvme-openstack.img']=8G 00:05:56.257 + nvme_files['nvme-zns.img']=5G 00:05:56.257 + (( SPDK_TEST_NVME_PMR == 1 )) 00:05:56.257 + (( SPDK_TEST_FTL == 1 )) 00:05:56.257 + (( SPDK_TEST_NVME_FDP == 1 )) 00:05:56.257 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:05:56.257 + for nvme in "${!nvme_files[@]}" 00:05:56.257 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:05:56.257 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:05:56.257 + for nvme in "${!nvme_files[@]}" 00:05:56.257 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:05:56.257 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:05:56.257 + for nvme in "${!nvme_files[@]}" 00:05:56.257 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:05:56.257 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:05:56.257 + for nvme in "${!nvme_files[@]}" 00:05:56.257 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:05:56.257 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:05:56.257 + for nvme in "${!nvme_files[@]}" 00:05:56.257 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:05:56.257 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:05:56.257 + for nvme in "${!nvme_files[@]}" 00:05:56.257 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:05:56.515 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:05:56.515 + for nvme in "${!nvme_files[@]}" 00:05:56.515 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:05:56.515 
Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:05:56.515 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:05:56.515 + echo 'End stage prepare_nvme.sh' 00:05:56.515 End stage prepare_nvme.sh 00:05:56.528 [Pipeline] sh 00:05:56.811 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:05:56.811 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora39 00:05:56.811 00:05:56.811 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:05:56.811 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:05:56.811 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:05:56.811 HELP=0 00:05:56.811 DRY_RUN=0 00:05:56.811 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img, 00:05:56.811 NVME_DISKS_TYPE=nvme,nvme, 00:05:56.811 NVME_AUTO_CREATE=0 00:05:56.811 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img, 00:05:56.811 NVME_CMB=,, 00:05:56.811 NVME_PMR=,, 00:05:56.811 NVME_ZNS=,, 00:05:56.811 NVME_MS=,, 00:05:56.811 NVME_FDP=,, 00:05:56.812 SPDK_VAGRANT_DISTRO=fedora39 00:05:56.812 SPDK_VAGRANT_VMCPU=10 00:05:56.812 SPDK_VAGRANT_VMRAM=12288 00:05:56.812 SPDK_VAGRANT_PROVIDER=libvirt 00:05:56.812 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:05:56.812 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:05:56.812 SPDK_OPENSTACK_NETWORK=0 00:05:56.812 VAGRANT_PACKAGE_BOX=0 00:05:56.812 
VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:05:56.812 FORCE_DISTRO=true 00:05:56.812 VAGRANT_BOX_VERSION= 00:05:56.812 EXTRA_VAGRANTFILES= 00:05:56.812 NIC_MODEL=e1000 00:05:56.812 00:05:56.812 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:05:56.812 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:05:59.344 Bringing machine 'default' up with 'libvirt' provider... 00:06:00.717 ==> default: Creating image (snapshot of base box volume). 00:06:00.976 ==> default: Creating domain with the following settings... 00:06:00.976 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732109159_271da4cae85ffc7eed73 00:06:00.976 ==> default: -- Domain type: kvm 00:06:00.976 ==> default: -- Cpus: 10 00:06:00.976 ==> default: -- Feature: acpi 00:06:00.976 ==> default: -- Feature: apic 00:06:00.976 ==> default: -- Feature: pae 00:06:00.976 ==> default: -- Memory: 12288M 00:06:00.976 ==> default: -- Memory Backing: hugepages: 00:06:00.976 ==> default: -- Management MAC: 00:06:00.976 ==> default: -- Loader: 00:06:00.976 ==> default: -- Nvram: 00:06:00.976 ==> default: -- Base box: spdk/fedora39 00:06:00.976 ==> default: -- Storage pool: default 00:06:00.976 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732109159_271da4cae85ffc7eed73.img (20G) 00:06:00.976 ==> default: -- Volume Cache: default 00:06:00.976 ==> default: -- Kernel: 00:06:00.976 ==> default: -- Initrd: 00:06:00.976 ==> default: -- Graphics Type: vnc 00:06:00.976 ==> default: -- Graphics Port: -1 00:06:00.976 ==> default: -- Graphics IP: 127.0.0.1 00:06:00.976 ==> default: -- Graphics Password: Not defined 00:06:00.976 ==> default: -- Video Type: cirrus 00:06:00.976 ==> default: -- Video VRAM: 9216 00:06:00.976 ==> default: -- Sound Type: 00:06:00.976 ==> default: -- Keymap: en-us 00:06:00.976 ==> default: -- TPM Path: 00:06:00.976 ==> 
default: -- INPUT: type=mouse, bus=ps2 00:06:00.976 ==> default: -- Command line args: 00:06:00.976 ==> default: -> value=-device, 00:06:00.976 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:06:00.976 ==> default: -> value=-drive, 00:06:00.976 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:06:00.976 ==> default: -> value=-device, 00:06:00.976 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:06:00.976 ==> default: -> value=-device, 00:06:00.976 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:06:00.976 ==> default: -> value=-drive, 00:06:00.976 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:06:00.976 ==> default: -> value=-device, 00:06:00.976 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:06:00.976 ==> default: -> value=-drive, 00:06:00.976 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:06:00.976 ==> default: -> value=-device, 00:06:00.976 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:06:00.976 ==> default: -> value=-drive, 00:06:00.976 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:06:00.977 ==> default: -> value=-device, 00:06:00.977 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:06:01.543 ==> default: Creating shared folders metadata... 00:06:01.543 ==> default: Starting domain. 00:06:03.448 ==> default: Waiting for domain to get an IP address... 00:06:21.531 ==> default: Waiting for SSH to become available... 
00:06:21.531 ==> default: Configuring and enabling network interfaces... 00:06:25.723 default: SSH address: 192.168.121.176:22 00:06:25.723 default: SSH username: vagrant 00:06:25.723 default: SSH auth method: private key 00:06:29.010 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:06:38.984 ==> default: Mounting SSHFS shared folder... 00:06:39.550 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:06:39.550 ==> default: Checking Mount.. 00:06:41.452 ==> default: Folder Successfully Mounted! 00:06:41.452 ==> default: Running provisioner: file... 00:06:42.020 default: ~/.gitconfig => .gitconfig 00:06:42.585 00:06:42.585 SUCCESS! 00:06:42.585 00:06:42.585 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:06:42.585 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:06:42.585 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 
00:06:42.585 00:06:42.593 [Pipeline] } 00:06:42.609 [Pipeline] // stage 00:06:42.617 [Pipeline] dir 00:06:42.618 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:06:42.620 [Pipeline] { 00:06:42.632 [Pipeline] catchError 00:06:42.633 [Pipeline] { 00:06:42.646 [Pipeline] sh 00:06:42.923 + vagrant ssh-config --host vagrant 00:06:42.923 + sed -ne '/^Host/,$p' 00:06:42.923 + tee ssh_conf 00:06:46.206 Host vagrant 00:06:46.207 HostName 192.168.121.176 00:06:46.207 User vagrant 00:06:46.207 Port 22 00:06:46.207 UserKnownHostsFile /dev/null 00:06:46.207 StrictHostKeyChecking no 00:06:46.207 PasswordAuthentication no 00:06:46.207 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:06:46.207 IdentitiesOnly yes 00:06:46.207 LogLevel FATAL 00:06:46.207 ForwardAgent yes 00:06:46.207 ForwardX11 yes 00:06:46.219 [Pipeline] withEnv 00:06:46.221 [Pipeline] { 00:06:46.236 [Pipeline] sh 00:06:46.517 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:06:46.517 source /etc/os-release 00:06:46.517 [[ -e /image.version ]] && img=$(< /image.version) 00:06:46.517 # Minimal, systemd-like check. 00:06:46.517 if [[ -e /.dockerenv ]]; then 00:06:46.517 # Clear garbage from the node's name: 00:06:46.517 # agt-er_autotest_547-896 -> autotest_547-896 00:06:46.517 # $HOSTNAME is the actual container id 00:06:46.517 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:06:46.517 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:06:46.517 # We can assume this is a mount from a host where container is running, 00:06:46.517 # so fetch its hostname to easily identify the target swarm worker. 
00:06:46.517 container="$(< /etc/hostname) ($agent)" 00:06:46.517 else 00:06:46.517 # Fallback 00:06:46.517 container=$agent 00:06:46.517 fi 00:06:46.517 fi 00:06:46.517 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:06:46.517 00:06:46.785 [Pipeline] } 00:06:46.806 [Pipeline] // withEnv 00:06:46.816 [Pipeline] setCustomBuildProperty 00:06:46.831 [Pipeline] stage 00:06:46.833 [Pipeline] { (Tests) 00:06:46.852 [Pipeline] sh 00:06:47.135 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:06:47.408 [Pipeline] sh 00:06:47.692 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:06:47.962 [Pipeline] timeout 00:06:47.962 Timeout set to expire in 1 hr 30 min 00:06:47.964 [Pipeline] { 00:06:47.981 [Pipeline] sh 00:06:48.264 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:06:48.831 HEAD is now at 82b85d9ca bdev/malloc: malloc_done() uses switch-case for clean up 00:06:48.842 [Pipeline] sh 00:06:49.121 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:06:49.392 [Pipeline] sh 00:06:49.669 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:06:49.936 [Pipeline] sh 00:06:50.213 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:06:50.471 ++ readlink -f spdk_repo 00:06:50.471 + DIR_ROOT=/home/vagrant/spdk_repo 00:06:50.471 + [[ -n /home/vagrant/spdk_repo ]] 00:06:50.471 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:06:50.471 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:06:50.471 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:06:50.471 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:06:50.471 + [[ -d /home/vagrant/spdk_repo/output ]] 00:06:50.471 + [[ raid-vg-autotest == pkgdep-* ]] 00:06:50.471 + cd /home/vagrant/spdk_repo 00:06:50.471 + source /etc/os-release 00:06:50.471 ++ NAME='Fedora Linux' 00:06:50.471 ++ VERSION='39 (Cloud Edition)' 00:06:50.471 ++ ID=fedora 00:06:50.471 ++ VERSION_ID=39 00:06:50.471 ++ VERSION_CODENAME= 00:06:50.471 ++ PLATFORM_ID=platform:f39 00:06:50.471 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:06:50.471 ++ ANSI_COLOR='0;38;2;60;110;180' 00:06:50.471 ++ LOGO=fedora-logo-icon 00:06:50.471 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:06:50.471 ++ HOME_URL=https://fedoraproject.org/ 00:06:50.471 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:06:50.471 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:06:50.471 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:06:50.471 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:06:50.471 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:06:50.471 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:06:50.471 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:06:50.471 ++ SUPPORT_END=2024-11-12 00:06:50.471 ++ VARIANT='Cloud Edition' 00:06:50.471 ++ VARIANT_ID=cloud 00:06:50.471 + uname -a 00:06:50.471 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:06:50.471 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:51.037 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:51.037 Hugepages 00:06:51.037 node hugesize free / total 00:06:51.037 node0 1048576kB 0 / 0 00:06:51.037 node0 2048kB 0 / 0 00:06:51.037 00:06:51.037 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:51.037 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:51.037 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:51.037 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:06:51.037 + rm -f /tmp/spdk-ld-path 00:06:51.037 + source autorun-spdk.conf 00:06:51.037 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:51.037 ++ SPDK_RUN_ASAN=1 00:06:51.037 ++ SPDK_RUN_UBSAN=1 00:06:51.037 ++ SPDK_TEST_RAID=1 00:06:51.037 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:06:51.037 ++ RUN_NIGHTLY=0 00:06:51.037 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:06:51.037 + [[ -n '' ]] 00:06:51.037 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:06:51.037 + for M in /var/spdk/build-*-manifest.txt 00:06:51.037 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:06:51.037 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:06:51.037 + for M in /var/spdk/build-*-manifest.txt 00:06:51.037 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:06:51.037 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:06:51.295 + for M in /var/spdk/build-*-manifest.txt 00:06:51.295 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:06:51.295 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:06:51.295 ++ uname 00:06:51.295 + [[ Linux == \L\i\n\u\x ]] 00:06:51.295 + sudo dmesg -T 00:06:51.295 + sudo dmesg --clear 00:06:51.295 + dmesg_pid=5209 00:06:51.295 + [[ Fedora Linux == FreeBSD ]] 00:06:51.295 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:51.295 + sudo dmesg -Tw 00:06:51.295 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:51.295 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:06:51.295 + [[ -x /usr/src/fio-static/fio ]] 00:06:51.295 + export FIO_BIN=/usr/src/fio-static/fio 00:06:51.295 + FIO_BIN=/usr/src/fio-static/fio 00:06:51.295 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:06:51.295 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:06:51.295 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:06:51.295 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:51.295 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:51.295 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:06:51.295 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:51.295 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:51.295 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:51.295 13:26:50 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:06:51.295 13:26:50 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:51.295 13:26:50 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:51.295 13:26:50 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:06:51.295 13:26:50 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:06:51.295 13:26:50 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:06:51.295 13:26:50 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:06:51.295 13:26:50 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:06:51.295 13:26:50 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:06:51.295 13:26:50 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:51.554 13:26:50 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:06:51.554 13:26:50 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:51.554 13:26:50 -- scripts/common.sh@15 -- $ shopt -s extglob 00:06:51.554 13:26:50 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:06:51.554 13:26:50 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:51.554 13:26:50 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:51.554 13:26:50 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.554 13:26:50 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.554 13:26:50 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.554 13:26:50 -- paths/export.sh@5 -- $ export PATH 00:06:51.554 13:26:50 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.554 13:26:50 -- 
common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:06:51.554 13:26:50 -- common/autobuild_common.sh@493 -- $ date +%s 00:06:51.554 13:26:50 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732109210.XXXXXX 00:06:51.554 13:26:50 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732109210.yuRPoT 00:06:51.554 13:26:50 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:06:51.554 13:26:50 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:06:51.554 13:26:50 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:06:51.554 13:26:50 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:06:51.554 13:26:50 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:06:51.554 13:26:50 -- common/autobuild_common.sh@509 -- $ get_config_params 00:06:51.554 13:26:50 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:06:51.554 13:26:50 -- common/autotest_common.sh@10 -- $ set +x 00:06:51.554 13:26:50 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:06:51.554 13:26:50 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:06:51.554 13:26:50 -- pm/common@17 -- $ local monitor 00:06:51.554 13:26:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:51.554 13:26:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:51.554 13:26:50 -- pm/common@25 -- $ sleep 1 00:06:51.554 13:26:50 -- pm/common@21 -- $ date +%s 00:06:51.554 13:26:50 -- pm/common@21 -- $ date +%s 00:06:51.554 
13:26:50 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732109210 00:06:51.554 13:26:50 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732109210 00:06:51.554 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732109210_collect-vmstat.pm.log 00:06:51.554 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732109210_collect-cpu-load.pm.log 00:06:52.487 13:26:51 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:06:52.487 13:26:51 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:06:52.487 13:26:51 -- spdk/autobuild.sh@12 -- $ umask 022 00:06:52.487 13:26:51 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:06:52.487 13:26:51 -- spdk/autobuild.sh@16 -- $ date -u 00:06:52.487 Wed Nov 20 01:26:51 PM UTC 2024 00:06:52.487 13:26:51 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:06:52.487 v25.01-pre-242-g82b85d9ca 00:06:52.487 13:26:51 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:06:52.487 13:26:51 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:06:52.487 13:26:51 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:06:52.487 13:26:51 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:06:52.487 13:26:51 -- common/autotest_common.sh@10 -- $ set +x 00:06:52.487 ************************************ 00:06:52.487 START TEST asan 00:06:52.487 ************************************ 00:06:52.487 using asan 00:06:52.487 13:26:51 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:06:52.487 00:06:52.487 real 0m0.000s 00:06:52.487 user 0m0.000s 00:06:52.487 sys 0m0.000s 00:06:52.487 ************************************ 00:06:52.487 END TEST asan 00:06:52.487 ************************************ 00:06:52.487 13:26:51 asan 
-- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:52.487 13:26:51 asan -- common/autotest_common.sh@10 -- $ set +x 00:06:52.487 13:26:51 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:06:52.487 13:26:51 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:06:52.487 13:26:51 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:06:52.487 13:26:51 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:06:52.487 13:26:51 -- common/autotest_common.sh@10 -- $ set +x 00:06:52.745 ************************************ 00:06:52.745 START TEST ubsan 00:06:52.745 ************************************ 00:06:52.745 using ubsan 00:06:52.745 13:26:51 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:06:52.745 00:06:52.745 real 0m0.000s 00:06:52.745 user 0m0.000s 00:06:52.745 sys 0m0.000s 00:06:52.745 13:26:51 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:52.745 13:26:51 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:06:52.745 ************************************ 00:06:52.745 END TEST ubsan 00:06:52.745 ************************************ 00:06:52.745 13:26:52 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:06:52.745 13:26:52 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:06:52.745 13:26:52 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:06:52.745 13:26:52 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:06:52.745 13:26:52 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:06:52.745 13:26:52 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:06:52.745 13:26:52 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:06:52.745 13:26:52 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:06:52.745 13:26:52 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared 00:06:52.745 Using default SPDK env in 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:52.745 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:53.311 Using 'verbs' RDMA provider 00:07:09.130 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:07:21.322 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:07:21.889 Creating mk/config.mk...done. 00:07:21.889 Creating mk/cc.flags.mk...done. 00:07:21.889 Type 'make' to build. 00:07:21.889 13:27:21 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:07:21.889 13:27:21 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:07:21.889 13:27:21 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:07:21.889 13:27:21 -- common/autotest_common.sh@10 -- $ set +x 00:07:21.889 ************************************ 00:07:21.889 START TEST make 00:07:21.889 ************************************ 00:07:21.889 13:27:21 make -- common/autotest_common.sh@1129 -- $ make -j10 00:07:22.148 make[1]: Nothing to be done for 'all'. 
00:07:34.353 The Meson build system 00:07:34.353 Version: 1.5.0 00:07:34.353 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:07:34.353 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:07:34.353 Build type: native build 00:07:34.353 Program cat found: YES (/usr/bin/cat) 00:07:34.353 Project name: DPDK 00:07:34.353 Project version: 24.03.0 00:07:34.353 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:07:34.353 C linker for the host machine: cc ld.bfd 2.40-14 00:07:34.353 Host machine cpu family: x86_64 00:07:34.353 Host machine cpu: x86_64 00:07:34.353 Message: ## Building in Developer Mode ## 00:07:34.353 Program pkg-config found: YES (/usr/bin/pkg-config) 00:07:34.353 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:07:34.353 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:07:34.353 Program python3 found: YES (/usr/bin/python3) 00:07:34.353 Program cat found: YES (/usr/bin/cat) 00:07:34.353 Compiler for C supports arguments -march=native: YES 00:07:34.353 Checking for size of "void *" : 8 00:07:34.353 Checking for size of "void *" : 8 (cached) 00:07:34.353 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:07:34.353 Library m found: YES 00:07:34.353 Library numa found: YES 00:07:34.353 Has header "numaif.h" : YES 00:07:34.353 Library fdt found: NO 00:07:34.353 Library execinfo found: NO 00:07:34.353 Has header "execinfo.h" : YES 00:07:34.353 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:07:34.353 Run-time dependency libarchive found: NO (tried pkgconfig) 00:07:34.353 Run-time dependency libbsd found: NO (tried pkgconfig) 00:07:34.353 Run-time dependency jansson found: NO (tried pkgconfig) 00:07:34.353 Run-time dependency openssl found: YES 3.1.1 00:07:34.353 Run-time dependency libpcap found: YES 1.10.4 00:07:34.353 Has header "pcap.h" with dependency 
libpcap: YES 00:07:34.353 Compiler for C supports arguments -Wcast-qual: YES 00:07:34.353 Compiler for C supports arguments -Wdeprecated: YES 00:07:34.353 Compiler for C supports arguments -Wformat: YES 00:07:34.353 Compiler for C supports arguments -Wformat-nonliteral: NO 00:07:34.353 Compiler for C supports arguments -Wformat-security: NO 00:07:34.353 Compiler for C supports arguments -Wmissing-declarations: YES 00:07:34.353 Compiler for C supports arguments -Wmissing-prototypes: YES 00:07:34.354 Compiler for C supports arguments -Wnested-externs: YES 00:07:34.354 Compiler for C supports arguments -Wold-style-definition: YES 00:07:34.354 Compiler for C supports arguments -Wpointer-arith: YES 00:07:34.354 Compiler for C supports arguments -Wsign-compare: YES 00:07:34.354 Compiler for C supports arguments -Wstrict-prototypes: YES 00:07:34.354 Compiler for C supports arguments -Wundef: YES 00:07:34.354 Compiler for C supports arguments -Wwrite-strings: YES 00:07:34.354 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:07:34.354 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:07:34.354 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:07:34.354 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:07:34.354 Program objdump found: YES (/usr/bin/objdump) 00:07:34.354 Compiler for C supports arguments -mavx512f: YES 00:07:34.354 Checking if "AVX512 checking" compiles: YES 00:07:34.354 Fetching value of define "__SSE4_2__" : 1 00:07:34.354 Fetching value of define "__AES__" : 1 00:07:34.354 Fetching value of define "__AVX__" : 1 00:07:34.354 Fetching value of define "__AVX2__" : 1 00:07:34.354 Fetching value of define "__AVX512BW__" : 1 00:07:34.354 Fetching value of define "__AVX512CD__" : 1 00:07:34.354 Fetching value of define "__AVX512DQ__" : 1 00:07:34.354 Fetching value of define "__AVX512F__" : 1 00:07:34.354 Fetching value of define "__AVX512VL__" : 1 00:07:34.354 Fetching value of define 
"__PCLMUL__" : 1 00:07:34.354 Fetching value of define "__RDRND__" : 1 00:07:34.354 Fetching value of define "__RDSEED__" : 1 00:07:34.354 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:07:34.354 Fetching value of define "__znver1__" : (undefined) 00:07:34.354 Fetching value of define "__znver2__" : (undefined) 00:07:34.354 Fetching value of define "__znver3__" : (undefined) 00:07:34.354 Fetching value of define "__znver4__" : (undefined) 00:07:34.354 Library asan found: YES 00:07:34.354 Compiler for C supports arguments -Wno-format-truncation: YES 00:07:34.354 Message: lib/log: Defining dependency "log" 00:07:34.354 Message: lib/kvargs: Defining dependency "kvargs" 00:07:34.354 Message: lib/telemetry: Defining dependency "telemetry" 00:07:34.354 Library rt found: YES 00:07:34.354 Checking for function "getentropy" : NO 00:07:34.354 Message: lib/eal: Defining dependency "eal" 00:07:34.354 Message: lib/ring: Defining dependency "ring" 00:07:34.354 Message: lib/rcu: Defining dependency "rcu" 00:07:34.354 Message: lib/mempool: Defining dependency "mempool" 00:07:34.354 Message: lib/mbuf: Defining dependency "mbuf" 00:07:34.354 Fetching value of define "__PCLMUL__" : 1 (cached) 00:07:34.354 Fetching value of define "__AVX512F__" : 1 (cached) 00:07:34.354 Fetching value of define "__AVX512BW__" : 1 (cached) 00:07:34.354 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:07:34.354 Fetching value of define "__AVX512VL__" : 1 (cached) 00:07:34.354 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:07:34.354 Compiler for C supports arguments -mpclmul: YES 00:07:34.354 Compiler for C supports arguments -maes: YES 00:07:34.354 Compiler for C supports arguments -mavx512f: YES (cached) 00:07:34.354 Compiler for C supports arguments -mavx512bw: YES 00:07:34.354 Compiler for C supports arguments -mavx512dq: YES 00:07:34.354 Compiler for C supports arguments -mavx512vl: YES 00:07:34.354 Compiler for C supports arguments -mvpclmulqdq: YES 
00:07:34.354 Compiler for C supports arguments -mavx2: YES 00:07:34.354 Compiler for C supports arguments -mavx: YES 00:07:34.354 Message: lib/net: Defining dependency "net" 00:07:34.354 Message: lib/meter: Defining dependency "meter" 00:07:34.354 Message: lib/ethdev: Defining dependency "ethdev" 00:07:34.354 Message: lib/pci: Defining dependency "pci" 00:07:34.354 Message: lib/cmdline: Defining dependency "cmdline" 00:07:34.354 Message: lib/hash: Defining dependency "hash" 00:07:34.354 Message: lib/timer: Defining dependency "timer" 00:07:34.354 Message: lib/compressdev: Defining dependency "compressdev" 00:07:34.354 Message: lib/cryptodev: Defining dependency "cryptodev" 00:07:34.354 Message: lib/dmadev: Defining dependency "dmadev" 00:07:34.354 Compiler for C supports arguments -Wno-cast-qual: YES 00:07:34.354 Message: lib/power: Defining dependency "power" 00:07:34.354 Message: lib/reorder: Defining dependency "reorder" 00:07:34.354 Message: lib/security: Defining dependency "security" 00:07:34.354 Has header "linux/userfaultfd.h" : YES 00:07:34.354 Has header "linux/vduse.h" : YES 00:07:34.354 Message: lib/vhost: Defining dependency "vhost" 00:07:34.354 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:07:34.354 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:07:34.354 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:07:34.354 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:07:34.354 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:07:34.354 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:07:34.354 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:07:34.354 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:07:34.354 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:07:34.354 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:07:34.354 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:07:34.354 Configuring doxy-api-html.conf using configuration 00:07:34.354 Configuring doxy-api-man.conf using configuration 00:07:34.354 Program mandb found: YES (/usr/bin/mandb) 00:07:34.354 Program sphinx-build found: NO 00:07:34.354 Configuring rte_build_config.h using configuration 00:07:34.354 Message: 00:07:34.354 ================= 00:07:34.354 Applications Enabled 00:07:34.354 ================= 00:07:34.354 00:07:34.354 apps: 00:07:34.354 00:07:34.354 00:07:34.354 Message: 00:07:34.354 ================= 00:07:34.354 Libraries Enabled 00:07:34.354 ================= 00:07:34.354 00:07:34.354 libs: 00:07:34.354 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:07:34.354 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:07:34.354 cryptodev, dmadev, power, reorder, security, vhost, 00:07:34.354 00:07:34.354 Message: 00:07:34.354 =============== 00:07:34.354 Drivers Enabled 00:07:34.354 =============== 00:07:34.354 00:07:34.354 common: 00:07:34.354 00:07:34.354 bus: 00:07:34.354 pci, vdev, 00:07:34.354 mempool: 00:07:34.354 ring, 00:07:34.354 dma: 00:07:34.354 00:07:34.354 net: 00:07:34.354 00:07:34.354 crypto: 00:07:34.354 00:07:34.354 compress: 00:07:34.354 00:07:34.354 vdpa: 00:07:34.354 00:07:34.354 00:07:34.354 Message: 00:07:34.354 ================= 00:07:34.354 Content Skipped 00:07:34.354 ================= 00:07:34.354 00:07:34.354 apps: 00:07:34.354 dumpcap: explicitly disabled via build config 00:07:34.354 graph: explicitly disabled via build config 00:07:34.354 pdump: explicitly disabled via build config 00:07:34.354 proc-info: explicitly disabled via build config 00:07:34.354 test-acl: explicitly disabled via build config 00:07:34.354 test-bbdev: explicitly disabled via build config 00:07:34.354 test-cmdline: explicitly disabled via build config 00:07:34.354 test-compress-perf: explicitly disabled via build config 00:07:34.354 test-crypto-perf: explicitly disabled via build 
config 00:07:34.354 test-dma-perf: explicitly disabled via build config 00:07:34.354 test-eventdev: explicitly disabled via build config 00:07:34.354 test-fib: explicitly disabled via build config 00:07:34.354 test-flow-perf: explicitly disabled via build config 00:07:34.354 test-gpudev: explicitly disabled via build config 00:07:34.354 test-mldev: explicitly disabled via build config 00:07:34.354 test-pipeline: explicitly disabled via build config 00:07:34.354 test-pmd: explicitly disabled via build config 00:07:34.354 test-regex: explicitly disabled via build config 00:07:34.354 test-sad: explicitly disabled via build config 00:07:34.354 test-security-perf: explicitly disabled via build config 00:07:34.354 00:07:34.354 libs: 00:07:34.354 argparse: explicitly disabled via build config 00:07:34.354 metrics: explicitly disabled via build config 00:07:34.354 acl: explicitly disabled via build config 00:07:34.354 bbdev: explicitly disabled via build config 00:07:34.354 bitratestats: explicitly disabled via build config 00:07:34.354 bpf: explicitly disabled via build config 00:07:34.354 cfgfile: explicitly disabled via build config 00:07:34.354 distributor: explicitly disabled via build config 00:07:34.354 efd: explicitly disabled via build config 00:07:34.354 eventdev: explicitly disabled via build config 00:07:34.354 dispatcher: explicitly disabled via build config 00:07:34.355 gpudev: explicitly disabled via build config 00:07:34.355 gro: explicitly disabled via build config 00:07:34.355 gso: explicitly disabled via build config 00:07:34.355 ip_frag: explicitly disabled via build config 00:07:34.355 jobstats: explicitly disabled via build config 00:07:34.355 latencystats: explicitly disabled via build config 00:07:34.355 lpm: explicitly disabled via build config 00:07:34.355 member: explicitly disabled via build config 00:07:34.355 pcapng: explicitly disabled via build config 00:07:34.355 rawdev: explicitly disabled via build config 00:07:34.355 regexdev: explicitly 
disabled via build config 00:07:34.355 mldev: explicitly disabled via build config 00:07:34.355 rib: explicitly disabled via build config 00:07:34.355 sched: explicitly disabled via build config 00:07:34.355 stack: explicitly disabled via build config 00:07:34.355 ipsec: explicitly disabled via build config 00:07:34.355 pdcp: explicitly disabled via build config 00:07:34.355 fib: explicitly disabled via build config 00:07:34.355 port: explicitly disabled via build config 00:07:34.355 pdump: explicitly disabled via build config 00:07:34.355 table: explicitly disabled via build config 00:07:34.355 pipeline: explicitly disabled via build config 00:07:34.355 graph: explicitly disabled via build config 00:07:34.355 node: explicitly disabled via build config 00:07:34.355 00:07:34.355 drivers: 00:07:34.355 common/cpt: not in enabled drivers build config 00:07:34.355 common/dpaax: not in enabled drivers build config 00:07:34.355 common/iavf: not in enabled drivers build config 00:07:34.355 common/idpf: not in enabled drivers build config 00:07:34.355 common/ionic: not in enabled drivers build config 00:07:34.355 common/mvep: not in enabled drivers build config 00:07:34.355 common/octeontx: not in enabled drivers build config 00:07:34.355 bus/auxiliary: not in enabled drivers build config 00:07:34.355 bus/cdx: not in enabled drivers build config 00:07:34.355 bus/dpaa: not in enabled drivers build config 00:07:34.355 bus/fslmc: not in enabled drivers build config 00:07:34.355 bus/ifpga: not in enabled drivers build config 00:07:34.355 bus/platform: not in enabled drivers build config 00:07:34.355 bus/uacce: not in enabled drivers build config 00:07:34.355 bus/vmbus: not in enabled drivers build config 00:07:34.355 common/cnxk: not in enabled drivers build config 00:07:34.355 common/mlx5: not in enabled drivers build config 00:07:34.355 common/nfp: not in enabled drivers build config 00:07:34.355 common/nitrox: not in enabled drivers build config 00:07:34.355 common/qat: not 
in enabled drivers build config 00:07:34.355 common/sfc_efx: not in enabled drivers build config 00:07:34.355 mempool/bucket: not in enabled drivers build config 00:07:34.355 mempool/cnxk: not in enabled drivers build config 00:07:34.355 mempool/dpaa: not in enabled drivers build config 00:07:34.355 mempool/dpaa2: not in enabled drivers build config 00:07:34.355 mempool/octeontx: not in enabled drivers build config 00:07:34.355 mempool/stack: not in enabled drivers build config 00:07:34.355 dma/cnxk: not in enabled drivers build config 00:07:34.355 dma/dpaa: not in enabled drivers build config 00:07:34.355 dma/dpaa2: not in enabled drivers build config 00:07:34.355 dma/hisilicon: not in enabled drivers build config 00:07:34.355 dma/idxd: not in enabled drivers build config 00:07:34.355 dma/ioat: not in enabled drivers build config 00:07:34.355 dma/skeleton: not in enabled drivers build config 00:07:34.355 net/af_packet: not in enabled drivers build config 00:07:34.355 net/af_xdp: not in enabled drivers build config 00:07:34.355 net/ark: not in enabled drivers build config 00:07:34.355 net/atlantic: not in enabled drivers build config 00:07:34.355 net/avp: not in enabled drivers build config 00:07:34.355 net/axgbe: not in enabled drivers build config 00:07:34.355 net/bnx2x: not in enabled drivers build config 00:07:34.355 net/bnxt: not in enabled drivers build config 00:07:34.355 net/bonding: not in enabled drivers build config 00:07:34.355 net/cnxk: not in enabled drivers build config 00:07:34.355 net/cpfl: not in enabled drivers build config 00:07:34.355 net/cxgbe: not in enabled drivers build config 00:07:34.355 net/dpaa: not in enabled drivers build config 00:07:34.355 net/dpaa2: not in enabled drivers build config 00:07:34.355 net/e1000: not in enabled drivers build config 00:07:34.355 net/ena: not in enabled drivers build config 00:07:34.355 net/enetc: not in enabled drivers build config 00:07:34.355 net/enetfec: not in enabled drivers build config 
00:07:34.355 net/enic: not in enabled drivers build config 00:07:34.355 net/failsafe: not in enabled drivers build config 00:07:34.355 net/fm10k: not in enabled drivers build config 00:07:34.355 net/gve: not in enabled drivers build config 00:07:34.355 net/hinic: not in enabled drivers build config 00:07:34.355 net/hns3: not in enabled drivers build config 00:07:34.355 net/i40e: not in enabled drivers build config 00:07:34.355 net/iavf: not in enabled drivers build config 00:07:34.355 net/ice: not in enabled drivers build config 00:07:34.355 net/idpf: not in enabled drivers build config 00:07:34.355 net/igc: not in enabled drivers build config 00:07:34.355 net/ionic: not in enabled drivers build config 00:07:34.355 net/ipn3ke: not in enabled drivers build config 00:07:34.355 net/ixgbe: not in enabled drivers build config 00:07:34.355 net/mana: not in enabled drivers build config 00:07:34.355 net/memif: not in enabled drivers build config 00:07:34.355 net/mlx4: not in enabled drivers build config 00:07:34.355 net/mlx5: not in enabled drivers build config 00:07:34.355 net/mvneta: not in enabled drivers build config 00:07:34.355 net/mvpp2: not in enabled drivers build config 00:07:34.355 net/netvsc: not in enabled drivers build config 00:07:34.355 net/nfb: not in enabled drivers build config 00:07:34.355 net/nfp: not in enabled drivers build config 00:07:34.355 net/ngbe: not in enabled drivers build config 00:07:34.355 net/null: not in enabled drivers build config 00:07:34.355 net/octeontx: not in enabled drivers build config 00:07:34.355 net/octeon_ep: not in enabled drivers build config 00:07:34.355 net/pcap: not in enabled drivers build config 00:07:34.355 net/pfe: not in enabled drivers build config 00:07:34.355 net/qede: not in enabled drivers build config 00:07:34.355 net/ring: not in enabled drivers build config 00:07:34.355 net/sfc: not in enabled drivers build config 00:07:34.355 net/softnic: not in enabled drivers build config 00:07:34.355 net/tap: not in 
enabled drivers build config 00:07:34.355 net/thunderx: not in enabled drivers build config 00:07:34.355 net/txgbe: not in enabled drivers build config 00:07:34.355 net/vdev_netvsc: not in enabled drivers build config 00:07:34.355 net/vhost: not in enabled drivers build config 00:07:34.355 net/virtio: not in enabled drivers build config 00:07:34.355 net/vmxnet3: not in enabled drivers build config 00:07:34.355 raw/*: missing internal dependency, "rawdev" 00:07:34.355 crypto/armv8: not in enabled drivers build config 00:07:34.355 crypto/bcmfs: not in enabled drivers build config 00:07:34.355 crypto/caam_jr: not in enabled drivers build config 00:07:34.355 crypto/ccp: not in enabled drivers build config 00:07:34.355 crypto/cnxk: not in enabled drivers build config 00:07:34.355 crypto/dpaa_sec: not in enabled drivers build config 00:07:34.355 crypto/dpaa2_sec: not in enabled drivers build config 00:07:34.355 crypto/ipsec_mb: not in enabled drivers build config 00:07:34.355 crypto/mlx5: not in enabled drivers build config 00:07:34.355 crypto/mvsam: not in enabled drivers build config 00:07:34.355 crypto/nitrox: not in enabled drivers build config 00:07:34.355 crypto/null: not in enabled drivers build config 00:07:34.355 crypto/octeontx: not in enabled drivers build config 00:07:34.355 crypto/openssl: not in enabled drivers build config 00:07:34.355 crypto/scheduler: not in enabled drivers build config 00:07:34.355 crypto/uadk: not in enabled drivers build config 00:07:34.355 crypto/virtio: not in enabled drivers build config 00:07:34.355 compress/isal: not in enabled drivers build config 00:07:34.355 compress/mlx5: not in enabled drivers build config 00:07:34.355 compress/nitrox: not in enabled drivers build config 00:07:34.355 compress/octeontx: not in enabled drivers build config 00:07:34.355 compress/zlib: not in enabled drivers build config 00:07:34.355 regex/*: missing internal dependency, "regexdev" 00:07:34.355 ml/*: missing internal dependency, "mldev" 
00:07:34.355 vdpa/ifc: not in enabled drivers build config 00:07:34.355 vdpa/mlx5: not in enabled drivers build config 00:07:34.355 vdpa/nfp: not in enabled drivers build config 00:07:34.355 vdpa/sfc: not in enabled drivers build config 00:07:34.355 event/*: missing internal dependency, "eventdev" 00:07:34.355 baseband/*: missing internal dependency, "bbdev" 00:07:34.355 gpu/*: missing internal dependency, "gpudev" 00:07:34.355 00:07:34.356 00:07:34.922 Build targets in project: 85 00:07:34.922 00:07:34.922 DPDK 24.03.0 00:07:34.922 00:07:34.922 User defined options 00:07:34.922 buildtype : debug 00:07:34.922 default_library : shared 00:07:34.922 libdir : lib 00:07:34.922 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:07:34.922 b_sanitize : address 00:07:34.922 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:07:34.922 c_link_args : 00:07:34.922 cpu_instruction_set: native 00:07:34.922 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:07:34.922 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:07:34.922 enable_docs : false 00:07:34.922 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:07:34.922 enable_kmods : false 00:07:34.922 max_lcores : 128 00:07:34.922 tests : false 00:07:34.922 00:07:34.922 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:07:35.489 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:07:35.489 [1/268] Compiling C object 
lib/librte_log.a.p/log_log_linux.c.o 00:07:35.489 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:07:35.489 [3/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:07:35.489 [4/268] Linking static target lib/librte_kvargs.a 00:07:35.489 [5/268] Linking static target lib/librte_log.a 00:07:35.489 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:07:36.056 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:07:36.056 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:07:36.056 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:07:36.057 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:07:36.057 [11/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:07:36.057 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:07:36.057 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:07:36.057 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:07:36.057 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:07:36.316 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:07:36.316 [17/268] Linking static target lib/librte_telemetry.a 00:07:36.316 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:07:36.575 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:07:36.834 [20/268] Linking target lib/librte_log.so.24.1 00:07:36.834 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:07:36.834 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:07:36.834 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:07:36.834 [24/268] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:07:36.834 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:07:36.834 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:07:36.834 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:07:37.092 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:07:37.092 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:07:37.092 [30/268] Linking target lib/librte_kvargs.so.24.1 00:07:37.092 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:07:37.092 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:07:37.092 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:07:37.092 [34/268] Linking target lib/librte_telemetry.so.24.1 00:07:37.349 [35/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:07:37.350 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:07:37.607 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:07:37.607 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:07:37.607 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:07:37.607 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:07:37.607 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:07:37.607 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:07:37.607 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:07:37.607 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:07:37.866 [45/268] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:07:37.866 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:07:37.866 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:07:37.866 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:07:37.866 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:07:38.125 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:07:38.125 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:07:38.125 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:07:38.384 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:07:38.384 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:07:38.384 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:07:38.384 [56/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:07:38.384 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:07:38.384 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:07:38.384 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:07:38.384 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:07:38.642 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:07:38.642 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:07:38.642 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:07:38.642 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:07:38.900 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:07:38.900 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:07:38.900 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 
00:07:38.900 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:07:38.900 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:07:38.900 [70/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:07:39.159 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:07:39.159 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:07:39.159 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:07:39.159 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:07:39.159 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:07:39.159 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:07:39.159 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:07:39.159 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:07:39.417 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:07:39.417 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:07:39.417 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:07:39.417 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:07:39.417 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:07:39.676 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:07:39.676 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:07:39.676 [86/268] Linking static target lib/librte_ring.a 00:07:39.676 [87/268] Linking static target lib/librte_eal.a 00:07:39.676 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:07:39.934 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:07:39.934 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:07:39.934 [91/268] Compiling C object 
lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:07:39.934 [92/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:07:39.934 [93/268] Linking static target lib/librte_rcu.a 00:07:39.934 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:07:40.192 [95/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:07:40.192 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:07:40.192 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:07:40.192 [98/268] Linking static target lib/librte_mempool.a 00:07:40.474 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:07:40.474 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:07:40.474 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:07:40.474 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:07:40.474 [103/268] Linking static target lib/librte_mbuf.a 00:07:40.474 [104/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:07:40.474 [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:07:40.474 [106/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:07:40.474 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:07:40.731 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:07:40.731 [109/268] Linking static target lib/librte_net.a 00:07:40.731 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:07:40.731 [111/268] Linking static target lib/librte_meter.a 00:07:40.989 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:07:40.989 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:07:40.989 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:07:40.989 [115/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:07:41.248 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:07:41.248 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:07:41.505 [118/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:07:41.505 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:07:41.505 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:07:41.764 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:07:41.764 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:07:42.022 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:07:42.280 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:07:42.280 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:07:42.280 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:07:42.280 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:07:42.280 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:07:42.280 [129/268] Linking static target lib/librte_pci.a 00:07:42.280 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:07:42.280 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:07:42.537 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:07:42.537 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:07:42.537 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:07:42.537 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:07:42.537 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:07:42.537 [137/268] Generating 
lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:42.537 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:07:42.795 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:07:42.795 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:07:42.795 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:07:42.795 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:07:42.795 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:07:42.795 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:07:42.795 [145/268] Linking static target lib/librte_cmdline.a 00:07:42.795 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:07:42.795 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:07:43.360 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:07:43.360 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:07:43.360 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:07:43.360 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:07:43.360 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:07:43.617 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:07:43.617 [154/268] Linking static target lib/librte_timer.a 00:07:43.617 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:07:43.876 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:07:43.876 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:07:44.135 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:07:44.135 [159/268] Linking static 
target lib/librte_hash.a 00:07:44.135 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:07:44.135 [161/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:07:44.135 [162/268] Linking static target lib/librte_compressdev.a 00:07:44.135 [163/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:07:44.135 [164/268] Linking static target lib/librte_ethdev.a 00:07:44.135 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:07:44.394 [166/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:07:44.394 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:07:44.394 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:07:44.394 [169/268] Linking static target lib/librte_dmadev.a 00:07:44.653 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:07:44.653 [171/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:07:44.653 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:07:44.653 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:07:44.921 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:07:45.178 [175/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:07:45.178 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:07:45.178 [177/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:45.178 [178/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:07:45.178 [179/268] Linking static target lib/librte_cryptodev.a 00:07:45.178 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:07:45.436 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 
00:07:45.436 [182/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:07:45.436 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:07:45.436 [184/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:45.695 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:07:45.695 [186/268] Linking static target lib/librte_power.a 00:07:45.953 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:07:45.953 [188/268] Linking static target lib/librte_reorder.a 00:07:45.953 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:07:45.953 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:07:45.953 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:07:45.953 [192/268] Linking static target lib/librte_security.a 00:07:46.212 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:07:46.470 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:07:46.470 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:07:46.732 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:07:46.990 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:07:46.990 [198/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:07:46.990 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:07:46.990 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:07:46.990 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:07:47.250 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:07:47.508 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:07:47.508 [204/268] Compiling C 
object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:07:47.508 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:07:47.508 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:07:47.766 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:07:47.766 [208/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:07:47.766 [209/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:47.766 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:07:47.766 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:07:48.024 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:07:48.024 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:07:48.024 [214/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:48.024 [215/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:48.024 [216/268] Linking static target drivers/librte_bus_pci.a 00:07:48.024 [217/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:48.024 [218/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:48.024 [219/268] Linking static target drivers/librte_bus_vdev.a 00:07:48.283 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:07:48.283 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:07:48.542 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:48.542 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:07:48.542 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 
00:07:48.542 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:48.542 [226/268] Linking static target drivers/librte_mempool_ring.a 00:07:48.542 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:49.500 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:07:52.791 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:07:52.791 [230/268] Linking target lib/librte_eal.so.24.1 00:07:52.791 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:07:52.791 [232/268] Linking target lib/librte_meter.so.24.1 00:07:52.791 [233/268] Linking target lib/librte_pci.so.24.1 00:07:52.791 [234/268] Linking target lib/librte_ring.so.24.1 00:07:52.791 [235/268] Linking target lib/librte_timer.so.24.1 00:07:52.791 [236/268] Linking target lib/librte_dmadev.so.24.1 00:07:52.791 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:07:52.791 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:07:52.791 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:07:52.791 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:07:52.791 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:07:52.791 [242/268] Linking target drivers/librte_bus_pci.so.24.1 00:07:52.791 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:07:52.791 [244/268] Linking target lib/librte_rcu.so.24.1 00:07:52.791 [245/268] Linking target lib/librte_mempool.so.24.1 00:07:52.791 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:07:52.791 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:07:52.791 [248/268] Linking target 
drivers/librte_mempool_ring.so.24.1 00:07:52.791 [249/268] Linking target lib/librte_mbuf.so.24.1 00:07:53.048 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:07:53.048 [251/268] Linking target lib/librte_reorder.so.24.1 00:07:53.048 [252/268] Linking target lib/librte_net.so.24.1 00:07:53.048 [253/268] Linking target lib/librte_compressdev.so.24.1 00:07:53.048 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:07:53.306 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:07:53.306 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:07:53.306 [257/268] Linking target lib/librte_cmdline.so.24.1 00:07:53.306 [258/268] Linking target lib/librte_hash.so.24.1 00:07:53.306 [259/268] Linking target lib/librte_security.so.24.1 00:07:53.306 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:07:53.564 [261/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:07:53.564 [262/268] Linking static target lib/librte_vhost.a 00:07:53.564 [263/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:53.564 [264/268] Linking target lib/librte_ethdev.so.24.1 00:07:53.822 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:07:53.822 [266/268] Linking target lib/librte_power.so.24.1 00:07:56.394 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:07:56.394 [268/268] Linking target lib/librte_vhost.so.24.1 00:07:56.394 INFO: autodetecting backend as ninja 00:07:56.394 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:08:14.477 CC lib/ut/ut.o 00:08:14.477 CC lib/ut_mock/mock.o 00:08:14.477 CC lib/log/log.o 00:08:14.477 CC lib/log/log_flags.o 00:08:14.477 CC lib/log/log_deprecated.o 00:08:14.477 LIB 
libspdk_ut.a 00:08:14.477 LIB libspdk_ut_mock.a 00:08:14.477 LIB libspdk_log.a 00:08:14.477 SO libspdk_ut.so.2.0 00:08:14.477 SO libspdk_ut_mock.so.6.0 00:08:14.477 SO libspdk_log.so.7.1 00:08:14.477 SYMLINK libspdk_ut.so 00:08:14.477 SYMLINK libspdk_ut_mock.so 00:08:14.477 SYMLINK libspdk_log.so 00:08:14.477 CC lib/dma/dma.o 00:08:14.477 CC lib/util/base64.o 00:08:14.477 CC lib/util/bit_array.o 00:08:14.477 CC lib/ioat/ioat.o 00:08:14.477 CC lib/util/cpuset.o 00:08:14.477 CXX lib/trace_parser/trace.o 00:08:14.477 CC lib/util/crc16.o 00:08:14.477 CC lib/util/crc32.o 00:08:14.477 CC lib/util/crc32c.o 00:08:14.477 CC lib/vfio_user/host/vfio_user_pci.o 00:08:14.477 CC lib/util/crc32_ieee.o 00:08:14.477 CC lib/util/crc64.o 00:08:14.477 CC lib/vfio_user/host/vfio_user.o 00:08:14.477 CC lib/util/dif.o 00:08:14.477 CC lib/util/fd.o 00:08:14.477 LIB libspdk_dma.a 00:08:14.477 CC lib/util/fd_group.o 00:08:14.477 SO libspdk_dma.so.5.0 00:08:14.477 CC lib/util/file.o 00:08:14.477 CC lib/util/hexlify.o 00:08:14.477 LIB libspdk_ioat.a 00:08:14.477 SYMLINK libspdk_dma.so 00:08:14.477 CC lib/util/iov.o 00:08:14.478 SO libspdk_ioat.so.7.0 00:08:14.478 SYMLINK libspdk_ioat.so 00:08:14.478 CC lib/util/math.o 00:08:14.478 CC lib/util/net.o 00:08:14.478 CC lib/util/pipe.o 00:08:14.478 LIB libspdk_vfio_user.a 00:08:14.478 CC lib/util/strerror_tls.o 00:08:14.478 CC lib/util/string.o 00:08:14.478 SO libspdk_vfio_user.so.5.0 00:08:14.478 CC lib/util/uuid.o 00:08:14.478 CC lib/util/xor.o 00:08:14.478 SYMLINK libspdk_vfio_user.so 00:08:14.478 CC lib/util/zipf.o 00:08:14.478 CC lib/util/md5.o 00:08:14.735 LIB libspdk_util.a 00:08:14.735 LIB libspdk_trace_parser.a 00:08:14.735 SO libspdk_util.so.10.1 00:08:14.994 SO libspdk_trace_parser.so.6.0 00:08:14.994 SYMLINK libspdk_trace_parser.so 00:08:14.994 SYMLINK libspdk_util.so 00:08:15.252 CC lib/vmd/vmd.o 00:08:15.252 CC lib/vmd/led.o 00:08:15.252 CC lib/json/json_parse.o 00:08:15.252 CC lib/json/json_util.o 00:08:15.252 CC 
lib/json/json_write.o 00:08:15.252 CC lib/env_dpdk/memory.o 00:08:15.252 CC lib/env_dpdk/env.o 00:08:15.252 CC lib/rdma_utils/rdma_utils.o 00:08:15.252 CC lib/conf/conf.o 00:08:15.252 CC lib/idxd/idxd.o 00:08:15.511 CC lib/idxd/idxd_user.o 00:08:15.511 LIB libspdk_conf.a 00:08:15.511 CC lib/env_dpdk/pci.o 00:08:15.511 SO libspdk_conf.so.6.0 00:08:15.511 LIB libspdk_rdma_utils.a 00:08:15.511 CC lib/env_dpdk/init.o 00:08:15.511 SO libspdk_rdma_utils.so.1.0 00:08:15.511 SYMLINK libspdk_conf.so 00:08:15.511 CC lib/env_dpdk/threads.o 00:08:15.511 LIB libspdk_json.a 00:08:15.511 SO libspdk_json.so.6.0 00:08:15.769 SYMLINK libspdk_rdma_utils.so 00:08:15.769 CC lib/env_dpdk/pci_ioat.o 00:08:15.769 SYMLINK libspdk_json.so 00:08:15.769 CC lib/env_dpdk/pci_virtio.o 00:08:15.769 CC lib/env_dpdk/pci_vmd.o 00:08:15.769 CC lib/rdma_provider/common.o 00:08:15.770 CC lib/env_dpdk/pci_idxd.o 00:08:15.770 CC lib/rdma_provider/rdma_provider_verbs.o 00:08:16.028 CC lib/env_dpdk/pci_event.o 00:08:16.028 CC lib/env_dpdk/sigbus_handler.o 00:08:16.028 CC lib/env_dpdk/pci_dpdk.o 00:08:16.028 CC lib/jsonrpc/jsonrpc_server.o 00:08:16.028 CC lib/env_dpdk/pci_dpdk_2207.o 00:08:16.028 CC lib/idxd/idxd_kernel.o 00:08:16.028 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:08:16.028 CC lib/jsonrpc/jsonrpc_client.o 00:08:16.028 LIB libspdk_rdma_provider.a 00:08:16.028 LIB libspdk_vmd.a 00:08:16.028 SO libspdk_rdma_provider.so.7.0 00:08:16.028 CC lib/env_dpdk/pci_dpdk_2211.o 00:08:16.028 SO libspdk_vmd.so.6.0 00:08:16.286 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:08:16.286 LIB libspdk_idxd.a 00:08:16.286 SYMLINK libspdk_rdma_provider.so 00:08:16.286 SYMLINK libspdk_vmd.so 00:08:16.286 SO libspdk_idxd.so.12.1 00:08:16.286 SYMLINK libspdk_idxd.so 00:08:16.286 LIB libspdk_jsonrpc.a 00:08:16.545 SO libspdk_jsonrpc.so.6.0 00:08:16.545 SYMLINK libspdk_jsonrpc.so 00:08:17.114 CC lib/rpc/rpc.o 00:08:17.114 LIB libspdk_env_dpdk.a 00:08:17.114 SO libspdk_env_dpdk.so.15.1 00:08:17.114 LIB libspdk_rpc.a 00:08:17.373 SYMLINK 
libspdk_env_dpdk.so 00:08:17.373 SO libspdk_rpc.so.6.0 00:08:17.373 SYMLINK libspdk_rpc.so 00:08:17.631 CC lib/keyring/keyring.o 00:08:17.631 CC lib/notify/notify_rpc.o 00:08:17.631 CC lib/keyring/keyring_rpc.o 00:08:17.631 CC lib/notify/notify.o 00:08:17.631 CC lib/trace/trace_flags.o 00:08:17.631 CC lib/trace/trace.o 00:08:17.631 CC lib/trace/trace_rpc.o 00:08:17.890 LIB libspdk_notify.a 00:08:17.890 SO libspdk_notify.so.6.0 00:08:17.890 LIB libspdk_keyring.a 00:08:17.890 SYMLINK libspdk_notify.so 00:08:17.890 LIB libspdk_trace.a 00:08:18.148 SO libspdk_trace.so.11.0 00:08:18.148 SO libspdk_keyring.so.2.0 00:08:18.148 SYMLINK libspdk_keyring.so 00:08:18.148 SYMLINK libspdk_trace.so 00:08:18.407 CC lib/thread/thread.o 00:08:18.407 CC lib/thread/iobuf.o 00:08:18.407 CC lib/sock/sock_rpc.o 00:08:18.407 CC lib/sock/sock.o 00:08:18.973 LIB libspdk_sock.a 00:08:18.973 SO libspdk_sock.so.10.0 00:08:18.973 SYMLINK libspdk_sock.so 00:08:19.539 CC lib/nvme/nvme_ctrlr_cmd.o 00:08:19.539 CC lib/nvme/nvme_ctrlr.o 00:08:19.539 CC lib/nvme/nvme_fabric.o 00:08:19.539 CC lib/nvme/nvme_ns.o 00:08:19.539 CC lib/nvme/nvme_ns_cmd.o 00:08:19.539 CC lib/nvme/nvme_pcie.o 00:08:19.539 CC lib/nvme/nvme_pcie_common.o 00:08:19.539 CC lib/nvme/nvme.o 00:08:19.539 CC lib/nvme/nvme_qpair.o 00:08:20.106 LIB libspdk_thread.a 00:08:20.106 CC lib/nvme/nvme_quirks.o 00:08:20.106 CC lib/nvme/nvme_transport.o 00:08:20.365 SO libspdk_thread.so.11.0 00:08:20.365 CC lib/nvme/nvme_discovery.o 00:08:20.365 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:08:20.365 SYMLINK libspdk_thread.so 00:08:20.365 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:08:20.365 CC lib/nvme/nvme_tcp.o 00:08:20.365 CC lib/nvme/nvme_opal.o 00:08:20.365 CC lib/nvme/nvme_io_msg.o 00:08:20.624 CC lib/nvme/nvme_poll_group.o 00:08:20.883 CC lib/nvme/nvme_zns.o 00:08:20.883 CC lib/accel/accel.o 00:08:20.883 CC lib/blob/blobstore.o 00:08:20.883 CC lib/accel/accel_rpc.o 00:08:21.142 CC lib/accel/accel_sw.o 00:08:21.142 CC lib/init/json_config.o 00:08:21.142 
CC lib/nvme/nvme_stubs.o 00:08:21.142 CC lib/nvme/nvme_auth.o 00:08:21.142 CC lib/nvme/nvme_cuse.o 00:08:21.401 CC lib/blob/request.o 00:08:21.401 CC lib/init/subsystem.o 00:08:21.401 CC lib/blob/zeroes.o 00:08:21.401 CC lib/init/subsystem_rpc.o 00:08:21.401 CC lib/init/rpc.o 00:08:21.660 CC lib/blob/blob_bs_dev.o 00:08:21.660 LIB libspdk_init.a 00:08:21.660 CC lib/virtio/virtio.o 00:08:21.660 SO libspdk_init.so.6.0 00:08:21.660 CC lib/fsdev/fsdev.o 00:08:21.918 SYMLINK libspdk_init.so 00:08:21.918 CC lib/fsdev/fsdev_io.o 00:08:21.918 CC lib/fsdev/fsdev_rpc.o 00:08:21.918 CC lib/nvme/nvme_rdma.o 00:08:21.918 CC lib/virtio/virtio_vhost_user.o 00:08:21.918 CC lib/event/app.o 00:08:21.918 CC lib/virtio/virtio_vfio_user.o 00:08:22.176 CC lib/event/reactor.o 00:08:22.176 CC lib/event/log_rpc.o 00:08:22.176 LIB libspdk_accel.a 00:08:22.176 CC lib/virtio/virtio_pci.o 00:08:22.176 SO libspdk_accel.so.16.0 00:08:22.176 SYMLINK libspdk_accel.so 00:08:22.176 CC lib/event/app_rpc.o 00:08:22.434 CC lib/event/scheduler_static.o 00:08:22.434 LIB libspdk_fsdev.a 00:08:22.434 SO libspdk_fsdev.so.2.0 00:08:22.434 LIB libspdk_virtio.a 00:08:22.434 CC lib/bdev/bdev_rpc.o 00:08:22.434 CC lib/bdev/bdev.o 00:08:22.434 CC lib/bdev/bdev_zone.o 00:08:22.434 CC lib/bdev/part.o 00:08:22.434 SO libspdk_virtio.so.7.0 00:08:22.434 SYMLINK libspdk_fsdev.so 00:08:22.434 CC lib/bdev/scsi_nvme.o 00:08:22.692 LIB libspdk_event.a 00:08:22.692 SYMLINK libspdk_virtio.so 00:08:22.692 SO libspdk_event.so.14.0 00:08:22.692 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:08:22.692 SYMLINK libspdk_event.so 00:08:23.261 LIB libspdk_nvme.a 00:08:23.519 LIB libspdk_fuse_dispatcher.a 00:08:23.519 SO libspdk_fuse_dispatcher.so.1.0 00:08:23.519 SO libspdk_nvme.so.15.0 00:08:23.519 SYMLINK libspdk_fuse_dispatcher.so 00:08:23.777 SYMLINK libspdk_nvme.so 00:08:24.713 LIB libspdk_blob.a 00:08:24.713 SO libspdk_blob.so.11.0 00:08:24.973 SYMLINK libspdk_blob.so 00:08:25.233 CC lib/blobfs/blobfs.o 00:08:25.233 CC 
lib/lvol/lvol.o 00:08:25.233 CC lib/blobfs/tree.o 00:08:25.495 LIB libspdk_bdev.a 00:08:25.495 SO libspdk_bdev.so.17.0 00:08:25.759 SYMLINK libspdk_bdev.so 00:08:25.759 CC lib/nvmf/ctrlr.o 00:08:25.759 CC lib/nvmf/ctrlr_discovery.o 00:08:25.759 CC lib/nvmf/subsystem.o 00:08:25.759 CC lib/nvmf/ctrlr_bdev.o 00:08:26.020 CC lib/scsi/dev.o 00:08:26.020 CC lib/nbd/nbd.o 00:08:26.020 CC lib/ftl/ftl_core.o 00:08:26.020 CC lib/ublk/ublk.o 00:08:26.020 CC lib/scsi/lun.o 00:08:26.280 LIB libspdk_blobfs.a 00:08:26.280 SO libspdk_blobfs.so.10.0 00:08:26.280 LIB libspdk_lvol.a 00:08:26.280 CC lib/ftl/ftl_init.o 00:08:26.280 SO libspdk_lvol.so.10.0 00:08:26.280 CC lib/nbd/nbd_rpc.o 00:08:26.280 SYMLINK libspdk_blobfs.so 00:08:26.280 CC lib/ftl/ftl_layout.o 00:08:26.280 CC lib/ftl/ftl_debug.o 00:08:26.280 SYMLINK libspdk_lvol.so 00:08:26.280 CC lib/ftl/ftl_io.o 00:08:26.542 CC lib/scsi/port.o 00:08:26.542 LIB libspdk_nbd.a 00:08:26.542 CC lib/scsi/scsi.o 00:08:26.542 SO libspdk_nbd.so.7.0 00:08:26.542 CC lib/ublk/ublk_rpc.o 00:08:26.542 CC lib/nvmf/nvmf.o 00:08:26.542 SYMLINK libspdk_nbd.so 00:08:26.542 CC lib/ftl/ftl_sb.o 00:08:26.542 CC lib/ftl/ftl_l2p.o 00:08:26.542 CC lib/scsi/scsi_bdev.o 00:08:26.542 CC lib/scsi/scsi_pr.o 00:08:26.542 CC lib/scsi/scsi_rpc.o 00:08:26.806 CC lib/nvmf/nvmf_rpc.o 00:08:26.806 LIB libspdk_ublk.a 00:08:26.806 CC lib/ftl/ftl_l2p_flat.o 00:08:26.806 SO libspdk_ublk.so.3.0 00:08:26.806 CC lib/ftl/ftl_nv_cache.o 00:08:26.806 CC lib/ftl/ftl_band.o 00:08:26.806 SYMLINK libspdk_ublk.so 00:08:26.806 CC lib/ftl/ftl_band_ops.o 00:08:27.071 CC lib/scsi/task.o 00:08:27.071 CC lib/nvmf/transport.o 00:08:27.071 CC lib/ftl/ftl_writer.o 00:08:27.071 CC lib/nvmf/tcp.o 00:08:27.332 CC lib/ftl/ftl_rq.o 00:08:27.332 LIB libspdk_scsi.a 00:08:27.332 CC lib/ftl/ftl_reloc.o 00:08:27.332 SO libspdk_scsi.so.9.0 00:08:27.332 SYMLINK libspdk_scsi.so 00:08:27.332 CC lib/ftl/ftl_l2p_cache.o 00:08:27.332 CC lib/ftl/ftl_p2l.o 00:08:27.332 CC lib/ftl/ftl_p2l_log.o 00:08:27.590 CC 
lib/nvmf/stubs.o 00:08:27.590 CC lib/nvmf/mdns_server.o 00:08:27.590 CC lib/iscsi/conn.o 00:08:27.849 CC lib/iscsi/init_grp.o 00:08:27.849 CC lib/iscsi/iscsi.o 00:08:27.849 CC lib/ftl/mngt/ftl_mngt.o 00:08:27.849 CC lib/nvmf/rdma.o 00:08:27.849 CC lib/vhost/vhost.o 00:08:28.107 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:08:28.107 CC lib/iscsi/param.o 00:08:28.107 CC lib/iscsi/portal_grp.o 00:08:28.107 CC lib/iscsi/tgt_node.o 00:08:28.107 CC lib/iscsi/iscsi_subsystem.o 00:08:28.367 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:08:28.367 CC lib/ftl/mngt/ftl_mngt_startup.o 00:08:28.367 CC lib/ftl/mngt/ftl_mngt_md.o 00:08:28.367 CC lib/iscsi/iscsi_rpc.o 00:08:28.367 CC lib/ftl/mngt/ftl_mngt_misc.o 00:08:28.626 CC lib/iscsi/task.o 00:08:28.626 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:08:28.626 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:08:28.885 CC lib/ftl/mngt/ftl_mngt_band.o 00:08:28.885 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:08:28.885 CC lib/vhost/vhost_rpc.o 00:08:28.885 CC lib/nvmf/auth.o 00:08:28.885 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:08:28.885 CC lib/vhost/vhost_scsi.o 00:08:28.885 CC lib/vhost/vhost_blk.o 00:08:28.885 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:08:29.144 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:08:29.144 CC lib/vhost/rte_vhost_user.o 00:08:29.144 CC lib/ftl/utils/ftl_conf.o 00:08:29.144 CC lib/ftl/utils/ftl_md.o 00:08:29.403 CC lib/ftl/utils/ftl_mempool.o 00:08:29.403 CC lib/ftl/utils/ftl_bitmap.o 00:08:29.403 LIB libspdk_iscsi.a 00:08:29.403 CC lib/ftl/utils/ftl_property.o 00:08:29.403 SO libspdk_iscsi.so.8.0 00:08:29.662 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:08:29.662 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:08:29.662 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:08:29.662 SYMLINK libspdk_iscsi.so 00:08:29.662 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:08:29.662 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:08:29.662 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:08:29.921 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:08:29.921 CC lib/ftl/upgrade/ftl_sb_v3.o 00:08:29.921 CC 
lib/ftl/upgrade/ftl_sb_v5.o 00:08:29.921 CC lib/ftl/nvc/ftl_nvc_dev.o 00:08:29.921 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:08:29.921 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:08:29.921 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:08:29.921 CC lib/ftl/base/ftl_base_dev.o 00:08:29.921 CC lib/ftl/base/ftl_base_bdev.o 00:08:29.921 CC lib/ftl/ftl_trace.o 00:08:30.180 LIB libspdk_vhost.a 00:08:30.180 LIB libspdk_ftl.a 00:08:30.180 SO libspdk_vhost.so.8.0 00:08:30.439 LIB libspdk_nvmf.a 00:08:30.439 SYMLINK libspdk_vhost.so 00:08:30.698 SO libspdk_ftl.so.9.0 00:08:30.698 SO libspdk_nvmf.so.20.0 00:08:30.956 SYMLINK libspdk_nvmf.so 00:08:30.956 SYMLINK libspdk_ftl.so 00:08:31.215 CC module/env_dpdk/env_dpdk_rpc.o 00:08:31.474 CC module/accel/error/accel_error.o 00:08:31.474 CC module/keyring/linux/keyring.o 00:08:31.474 CC module/keyring/file/keyring.o 00:08:31.474 CC module/blob/bdev/blob_bdev.o 00:08:31.474 CC module/scheduler/dynamic/scheduler_dynamic.o 00:08:31.474 CC module/accel/ioat/accel_ioat.o 00:08:31.474 CC module/fsdev/aio/fsdev_aio.o 00:08:31.474 CC module/accel/dsa/accel_dsa.o 00:08:31.474 CC module/sock/posix/posix.o 00:08:31.474 LIB libspdk_env_dpdk_rpc.a 00:08:31.474 SO libspdk_env_dpdk_rpc.so.6.0 00:08:31.474 SYMLINK libspdk_env_dpdk_rpc.so 00:08:31.474 CC module/accel/ioat/accel_ioat_rpc.o 00:08:31.474 CC module/keyring/file/keyring_rpc.o 00:08:31.474 CC module/keyring/linux/keyring_rpc.o 00:08:31.474 CC module/accel/dsa/accel_dsa_rpc.o 00:08:31.474 CC module/accel/error/accel_error_rpc.o 00:08:31.474 LIB libspdk_scheduler_dynamic.a 00:08:31.474 SO libspdk_scheduler_dynamic.so.4.0 00:08:31.733 LIB libspdk_accel_ioat.a 00:08:31.733 LIB libspdk_keyring_linux.a 00:08:31.733 SO libspdk_accel_ioat.so.6.0 00:08:31.733 LIB libspdk_keyring_file.a 00:08:31.733 LIB libspdk_blob_bdev.a 00:08:31.733 SYMLINK libspdk_scheduler_dynamic.so 00:08:31.733 SO libspdk_keyring_linux.so.1.0 00:08:31.733 SO libspdk_keyring_file.so.2.0 00:08:31.733 SO libspdk_blob_bdev.so.11.0 
00:08:31.733 LIB libspdk_accel_dsa.a 00:08:31.733 SYMLINK libspdk_accel_ioat.so 00:08:31.733 LIB libspdk_accel_error.a 00:08:31.733 SO libspdk_accel_dsa.so.5.0 00:08:31.733 SO libspdk_accel_error.so.2.0 00:08:31.733 SYMLINK libspdk_blob_bdev.so 00:08:31.733 SYMLINK libspdk_keyring_file.so 00:08:31.733 CC module/fsdev/aio/fsdev_aio_rpc.o 00:08:31.733 CC module/fsdev/aio/linux_aio_mgr.o 00:08:31.733 SYMLINK libspdk_keyring_linux.so 00:08:31.733 SYMLINK libspdk_accel_dsa.so 00:08:31.733 SYMLINK libspdk_accel_error.so 00:08:31.733 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:08:31.733 CC module/accel/iaa/accel_iaa.o 00:08:31.733 CC module/accel/iaa/accel_iaa_rpc.o 00:08:31.992 CC module/scheduler/gscheduler/gscheduler.o 00:08:31.992 LIB libspdk_scheduler_dpdk_governor.a 00:08:31.992 SO libspdk_scheduler_dpdk_governor.so.4.0 00:08:31.992 CC module/bdev/delay/vbdev_delay.o 00:08:31.992 CC module/blobfs/bdev/blobfs_bdev.o 00:08:31.992 LIB libspdk_scheduler_gscheduler.a 00:08:31.992 LIB libspdk_accel_iaa.a 00:08:31.992 SO libspdk_scheduler_gscheduler.so.4.0 00:08:31.992 LIB libspdk_fsdev_aio.a 00:08:31.992 SYMLINK libspdk_scheduler_dpdk_governor.so 00:08:31.992 SO libspdk_accel_iaa.so.3.0 00:08:31.992 CC module/bdev/delay/vbdev_delay_rpc.o 00:08:31.992 CC module/bdev/error/vbdev_error.o 00:08:32.250 SO libspdk_fsdev_aio.so.1.0 00:08:32.250 SYMLINK libspdk_scheduler_gscheduler.so 00:08:32.250 SYMLINK libspdk_accel_iaa.so 00:08:32.250 LIB libspdk_sock_posix.a 00:08:32.250 CC module/bdev/gpt/gpt.o 00:08:32.250 CC module/bdev/lvol/vbdev_lvol.o 00:08:32.250 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:08:32.250 SYMLINK libspdk_fsdev_aio.so 00:08:32.250 CC module/bdev/error/vbdev_error_rpc.o 00:08:32.250 SO libspdk_sock_posix.so.6.0 00:08:32.250 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:08:32.250 SYMLINK libspdk_sock_posix.so 00:08:32.250 CC module/bdev/gpt/vbdev_gpt.o 00:08:32.544 CC module/bdev/malloc/bdev_malloc.o 00:08:32.544 CC module/bdev/null/bdev_null.o 00:08:32.544 
CC module/bdev/malloc/bdev_malloc_rpc.o 00:08:32.544 LIB libspdk_blobfs_bdev.a 00:08:32.544 LIB libspdk_bdev_error.a 00:08:32.544 LIB libspdk_bdev_delay.a 00:08:32.544 SO libspdk_bdev_error.so.6.0 00:08:32.544 SO libspdk_blobfs_bdev.so.6.0 00:08:32.544 SO libspdk_bdev_delay.so.6.0 00:08:32.544 SYMLINK libspdk_bdev_error.so 00:08:32.544 SYMLINK libspdk_blobfs_bdev.so 00:08:32.544 CC module/bdev/null/bdev_null_rpc.o 00:08:32.544 SYMLINK libspdk_bdev_delay.so 00:08:32.544 CC module/bdev/nvme/bdev_nvme.o 00:08:32.803 LIB libspdk_bdev_gpt.a 00:08:32.803 SO libspdk_bdev_gpt.so.6.0 00:08:32.803 CC module/bdev/passthru/vbdev_passthru.o 00:08:32.803 LIB libspdk_bdev_null.a 00:08:32.803 CC module/bdev/raid/bdev_raid.o 00:08:32.803 SO libspdk_bdev_null.so.6.0 00:08:32.803 CC module/bdev/split/vbdev_split.o 00:08:32.803 LIB libspdk_bdev_lvol.a 00:08:32.803 LIB libspdk_bdev_malloc.a 00:08:32.803 SYMLINK libspdk_bdev_gpt.so 00:08:32.803 CC module/bdev/split/vbdev_split_rpc.o 00:08:32.803 SO libspdk_bdev_lvol.so.6.0 00:08:32.803 SO libspdk_bdev_malloc.so.6.0 00:08:32.803 SYMLINK libspdk_bdev_null.so 00:08:32.803 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:08:32.803 CC module/bdev/zone_block/vbdev_zone_block.o 00:08:32.803 CC module/bdev/aio/bdev_aio.o 00:08:32.803 SYMLINK libspdk_bdev_lvol.so 00:08:32.803 CC module/bdev/aio/bdev_aio_rpc.o 00:08:32.803 SYMLINK libspdk_bdev_malloc.so 00:08:32.803 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:08:33.060 CC module/bdev/raid/bdev_raid_rpc.o 00:08:33.060 LIB libspdk_bdev_split.a 00:08:33.060 CC module/bdev/raid/bdev_raid_sb.o 00:08:33.060 LIB libspdk_bdev_passthru.a 00:08:33.060 SO libspdk_bdev_split.so.6.0 00:08:33.060 SO libspdk_bdev_passthru.so.6.0 00:08:33.060 CC module/bdev/nvme/bdev_nvme_rpc.o 00:08:33.060 SYMLINK libspdk_bdev_split.so 00:08:33.060 CC module/bdev/nvme/nvme_rpc.o 00:08:33.060 CC module/bdev/raid/raid0.o 00:08:33.060 SYMLINK libspdk_bdev_passthru.so 00:08:33.060 LIB libspdk_bdev_zone_block.a 00:08:33.318 
LIB libspdk_bdev_aio.a 00:08:33.318 SO libspdk_bdev_zone_block.so.6.0 00:08:33.318 CC module/bdev/ftl/bdev_ftl.o 00:08:33.318 SO libspdk_bdev_aio.so.6.0 00:08:33.318 CC module/bdev/ftl/bdev_ftl_rpc.o 00:08:33.318 SYMLINK libspdk_bdev_zone_block.so 00:08:33.318 CC module/bdev/nvme/bdev_mdns_client.o 00:08:33.318 SYMLINK libspdk_bdev_aio.so 00:08:33.318 CC module/bdev/raid/raid1.o 00:08:33.318 CC module/bdev/nvme/vbdev_opal.o 00:08:33.319 CC module/bdev/iscsi/bdev_iscsi.o 00:08:33.577 CC module/bdev/virtio/bdev_virtio_scsi.o 00:08:33.577 CC module/bdev/virtio/bdev_virtio_blk.o 00:08:33.577 CC module/bdev/virtio/bdev_virtio_rpc.o 00:08:33.577 LIB libspdk_bdev_ftl.a 00:08:33.577 SO libspdk_bdev_ftl.so.6.0 00:08:33.577 CC module/bdev/raid/concat.o 00:08:33.577 SYMLINK libspdk_bdev_ftl.so 00:08:33.577 CC module/bdev/raid/raid5f.o 00:08:33.577 CC module/bdev/nvme/vbdev_opal_rpc.o 00:08:33.577 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:08:33.577 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:08:33.835 LIB libspdk_bdev_iscsi.a 00:08:33.835 SO libspdk_bdev_iscsi.so.6.0 00:08:34.094 SYMLINK libspdk_bdev_iscsi.so 00:08:34.094 LIB libspdk_bdev_virtio.a 00:08:34.094 SO libspdk_bdev_virtio.so.6.0 00:08:34.094 LIB libspdk_bdev_raid.a 00:08:34.094 SYMLINK libspdk_bdev_virtio.so 00:08:34.352 SO libspdk_bdev_raid.so.6.0 00:08:34.352 SYMLINK libspdk_bdev_raid.so 00:08:35.288 LIB libspdk_bdev_nvme.a 00:08:35.548 SO libspdk_bdev_nvme.so.7.1 00:08:35.548 SYMLINK libspdk_bdev_nvme.so 00:08:36.115 CC module/event/subsystems/vmd/vmd.o 00:08:36.116 CC module/event/subsystems/vmd/vmd_rpc.o 00:08:36.116 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:08:36.116 CC module/event/subsystems/scheduler/scheduler.o 00:08:36.116 CC module/event/subsystems/keyring/keyring.o 00:08:36.116 CC module/event/subsystems/fsdev/fsdev.o 00:08:36.116 CC module/event/subsystems/sock/sock.o 00:08:36.116 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:08:36.116 CC module/event/subsystems/iobuf/iobuf.o 00:08:36.375 
LIB libspdk_event_keyring.a 00:08:36.375 LIB libspdk_event_vmd.a 00:08:36.375 LIB libspdk_event_scheduler.a 00:08:36.375 LIB libspdk_event_vhost_blk.a 00:08:36.375 LIB libspdk_event_fsdev.a 00:08:36.375 LIB libspdk_event_sock.a 00:08:36.375 SO libspdk_event_keyring.so.1.0 00:08:36.375 SO libspdk_event_vmd.so.6.0 00:08:36.375 SO libspdk_event_fsdev.so.1.0 00:08:36.375 SO libspdk_event_scheduler.so.4.0 00:08:36.375 SO libspdk_event_vhost_blk.so.3.0 00:08:36.375 SO libspdk_event_sock.so.5.0 00:08:36.375 LIB libspdk_event_iobuf.a 00:08:36.375 SYMLINK libspdk_event_keyring.so 00:08:36.375 SYMLINK libspdk_event_scheduler.so 00:08:36.375 SYMLINK libspdk_event_vmd.so 00:08:36.375 SYMLINK libspdk_event_fsdev.so 00:08:36.375 SYMLINK libspdk_event_vhost_blk.so 00:08:36.375 SO libspdk_event_iobuf.so.3.0 00:08:36.375 SYMLINK libspdk_event_sock.so 00:08:36.375 SYMLINK libspdk_event_iobuf.so 00:08:36.943 CC module/event/subsystems/accel/accel.o 00:08:36.943 LIB libspdk_event_accel.a 00:08:37.202 SO libspdk_event_accel.so.6.0 00:08:37.202 SYMLINK libspdk_event_accel.so 00:08:37.461 CC module/event/subsystems/bdev/bdev.o 00:08:37.721 LIB libspdk_event_bdev.a 00:08:37.721 SO libspdk_event_bdev.so.6.0 00:08:37.981 SYMLINK libspdk_event_bdev.so 00:08:38.239 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:08:38.239 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:08:38.239 CC module/event/subsystems/ublk/ublk.o 00:08:38.239 CC module/event/subsystems/nbd/nbd.o 00:08:38.239 CC module/event/subsystems/scsi/scsi.o 00:08:38.239 LIB libspdk_event_ublk.a 00:08:38.239 LIB libspdk_event_nbd.a 00:08:38.239 LIB libspdk_event_scsi.a 00:08:38.239 SO libspdk_event_ublk.so.3.0 00:08:38.499 SO libspdk_event_nbd.so.6.0 00:08:38.499 SO libspdk_event_scsi.so.6.0 00:08:38.499 LIB libspdk_event_nvmf.a 00:08:38.499 SYMLINK libspdk_event_ublk.so 00:08:38.499 SYMLINK libspdk_event_nbd.so 00:08:38.499 SO libspdk_event_nvmf.so.6.0 00:08:38.499 SYMLINK libspdk_event_scsi.so 00:08:38.499 SYMLINK libspdk_event_nvmf.so 
00:08:38.758 CC module/event/subsystems/iscsi/iscsi.o 00:08:38.758 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:08:39.016 LIB libspdk_event_vhost_scsi.a 00:08:39.016 LIB libspdk_event_iscsi.a 00:08:39.016 SO libspdk_event_vhost_scsi.so.3.0 00:08:39.016 SO libspdk_event_iscsi.so.6.0 00:08:39.016 SYMLINK libspdk_event_iscsi.so 00:08:39.016 SYMLINK libspdk_event_vhost_scsi.so 00:08:39.274 SO libspdk.so.6.0 00:08:39.274 SYMLINK libspdk.so 00:08:39.532 CXX app/trace/trace.o 00:08:39.532 CC app/spdk_lspci/spdk_lspci.o 00:08:39.532 CC app/trace_record/trace_record.o 00:08:39.790 CC app/iscsi_tgt/iscsi_tgt.o 00:08:39.790 CC examples/interrupt_tgt/interrupt_tgt.o 00:08:39.790 CC app/nvmf_tgt/nvmf_main.o 00:08:39.790 CC examples/util/zipf/zipf.o 00:08:39.790 CC app/spdk_tgt/spdk_tgt.o 00:08:39.790 CC test/thread/poller_perf/poller_perf.o 00:08:39.790 CC examples/ioat/perf/perf.o 00:08:39.790 LINK spdk_lspci 00:08:39.790 LINK interrupt_tgt 00:08:39.790 LINK nvmf_tgt 00:08:39.790 LINK poller_perf 00:08:39.790 LINK zipf 00:08:39.790 LINK iscsi_tgt 00:08:40.049 LINK spdk_tgt 00:08:40.049 LINK spdk_trace_record 00:08:40.049 CC app/spdk_nvme_perf/perf.o 00:08:40.049 LINK ioat_perf 00:08:40.049 LINK spdk_trace 00:08:40.049 CC app/spdk_nvme_identify/identify.o 00:08:40.049 CC examples/ioat/verify/verify.o 00:08:40.049 CC app/spdk_nvme_discover/discovery_aer.o 00:08:40.308 CC app/spdk_top/spdk_top.o 00:08:40.308 CC test/dma/test_dma/test_dma.o 00:08:40.308 CC test/app/bdev_svc/bdev_svc.o 00:08:40.308 CC examples/sock/hello_world/hello_sock.o 00:08:40.308 CC examples/thread/thread/thread_ex.o 00:08:40.308 CC examples/vmd/lsvmd/lsvmd.o 00:08:40.308 LINK spdk_nvme_discover 00:08:40.308 LINK verify 00:08:40.308 LINK bdev_svc 00:08:40.566 LINK lsvmd 00:08:40.566 LINK hello_sock 00:08:40.566 LINK thread 00:08:40.824 CC examples/idxd/perf/perf.o 00:08:40.824 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:08:40.824 LINK test_dma 00:08:40.824 CC app/spdk_dd/spdk_dd.o 00:08:40.824 CC 
examples/vmd/led/led.o 00:08:40.824 TEST_HEADER include/spdk/accel.h 00:08:40.824 TEST_HEADER include/spdk/accel_module.h 00:08:40.824 TEST_HEADER include/spdk/assert.h 00:08:40.824 TEST_HEADER include/spdk/barrier.h 00:08:40.824 TEST_HEADER include/spdk/base64.h 00:08:40.824 TEST_HEADER include/spdk/bdev.h 00:08:40.824 TEST_HEADER include/spdk/bdev_module.h 00:08:40.824 TEST_HEADER include/spdk/bdev_zone.h 00:08:40.824 TEST_HEADER include/spdk/bit_array.h 00:08:40.824 TEST_HEADER include/spdk/bit_pool.h 00:08:40.824 TEST_HEADER include/spdk/blob_bdev.h 00:08:40.824 TEST_HEADER include/spdk/blobfs_bdev.h 00:08:40.824 TEST_HEADER include/spdk/blobfs.h 00:08:40.824 TEST_HEADER include/spdk/blob.h 00:08:40.824 TEST_HEADER include/spdk/conf.h 00:08:40.824 TEST_HEADER include/spdk/config.h 00:08:40.824 TEST_HEADER include/spdk/cpuset.h 00:08:40.824 TEST_HEADER include/spdk/crc16.h 00:08:40.824 TEST_HEADER include/spdk/crc32.h 00:08:40.824 TEST_HEADER include/spdk/crc64.h 00:08:40.824 TEST_HEADER include/spdk/dif.h 00:08:40.824 TEST_HEADER include/spdk/dma.h 00:08:40.824 TEST_HEADER include/spdk/endian.h 00:08:40.824 CC app/fio/nvme/fio_plugin.o 00:08:40.824 TEST_HEADER include/spdk/env_dpdk.h 00:08:40.824 TEST_HEADER include/spdk/env.h 00:08:40.824 TEST_HEADER include/spdk/event.h 00:08:40.824 TEST_HEADER include/spdk/fd_group.h 00:08:40.824 LINK spdk_nvme_perf 00:08:40.824 TEST_HEADER include/spdk/fd.h 00:08:40.824 TEST_HEADER include/spdk/file.h 00:08:40.824 TEST_HEADER include/spdk/fsdev.h 00:08:40.824 TEST_HEADER include/spdk/fsdev_module.h 00:08:40.824 TEST_HEADER include/spdk/ftl.h 00:08:40.824 TEST_HEADER include/spdk/fuse_dispatcher.h 00:08:40.824 TEST_HEADER include/spdk/gpt_spec.h 00:08:40.824 TEST_HEADER include/spdk/hexlify.h 00:08:40.824 TEST_HEADER include/spdk/histogram_data.h 00:08:40.824 TEST_HEADER include/spdk/idxd.h 00:08:40.824 TEST_HEADER include/spdk/idxd_spec.h 00:08:41.083 TEST_HEADER include/spdk/init.h 00:08:41.083 LINK led 00:08:41.083 
TEST_HEADER include/spdk/ioat.h 00:08:41.083 TEST_HEADER include/spdk/ioat_spec.h 00:08:41.083 TEST_HEADER include/spdk/iscsi_spec.h 00:08:41.083 TEST_HEADER include/spdk/json.h 00:08:41.083 TEST_HEADER include/spdk/jsonrpc.h 00:08:41.083 TEST_HEADER include/spdk/keyring.h 00:08:41.083 TEST_HEADER include/spdk/keyring_module.h 00:08:41.083 TEST_HEADER include/spdk/likely.h 00:08:41.083 TEST_HEADER include/spdk/log.h 00:08:41.083 TEST_HEADER include/spdk/lvol.h 00:08:41.083 TEST_HEADER include/spdk/md5.h 00:08:41.083 TEST_HEADER include/spdk/memory.h 00:08:41.083 TEST_HEADER include/spdk/mmio.h 00:08:41.083 TEST_HEADER include/spdk/nbd.h 00:08:41.083 TEST_HEADER include/spdk/net.h 00:08:41.083 TEST_HEADER include/spdk/notify.h 00:08:41.083 TEST_HEADER include/spdk/nvme.h 00:08:41.083 TEST_HEADER include/spdk/nvme_intel.h 00:08:41.083 TEST_HEADER include/spdk/nvme_ocssd.h 00:08:41.083 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:08:41.083 TEST_HEADER include/spdk/nvme_spec.h 00:08:41.083 TEST_HEADER include/spdk/nvme_zns.h 00:08:41.083 TEST_HEADER include/spdk/nvmf_cmd.h 00:08:41.083 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:08:41.083 TEST_HEADER include/spdk/nvmf.h 00:08:41.083 TEST_HEADER include/spdk/nvmf_spec.h 00:08:41.083 TEST_HEADER include/spdk/nvmf_transport.h 00:08:41.083 TEST_HEADER include/spdk/opal.h 00:08:41.083 TEST_HEADER include/spdk/opal_spec.h 00:08:41.083 TEST_HEADER include/spdk/pci_ids.h 00:08:41.083 TEST_HEADER include/spdk/pipe.h 00:08:41.083 TEST_HEADER include/spdk/queue.h 00:08:41.083 TEST_HEADER include/spdk/reduce.h 00:08:41.083 TEST_HEADER include/spdk/rpc.h 00:08:41.083 TEST_HEADER include/spdk/scheduler.h 00:08:41.083 TEST_HEADER include/spdk/scsi.h 00:08:41.083 TEST_HEADER include/spdk/scsi_spec.h 00:08:41.083 TEST_HEADER include/spdk/sock.h 00:08:41.083 TEST_HEADER include/spdk/stdinc.h 00:08:41.083 TEST_HEADER include/spdk/string.h 00:08:41.083 TEST_HEADER include/spdk/thread.h 00:08:41.083 TEST_HEADER include/spdk/trace.h 
00:08:41.083 TEST_HEADER include/spdk/trace_parser.h 00:08:41.083 TEST_HEADER include/spdk/tree.h 00:08:41.083 TEST_HEADER include/spdk/ublk.h 00:08:41.083 TEST_HEADER include/spdk/util.h 00:08:41.083 TEST_HEADER include/spdk/uuid.h 00:08:41.083 TEST_HEADER include/spdk/version.h 00:08:41.083 CC app/fio/bdev/fio_plugin.o 00:08:41.083 TEST_HEADER include/spdk/vfio_user_pci.h 00:08:41.083 TEST_HEADER include/spdk/vfio_user_spec.h 00:08:41.083 TEST_HEADER include/spdk/vhost.h 00:08:41.083 LINK idxd_perf 00:08:41.083 TEST_HEADER include/spdk/vmd.h 00:08:41.083 TEST_HEADER include/spdk/xor.h 00:08:41.083 TEST_HEADER include/spdk/zipf.h 00:08:41.083 CXX test/cpp_headers/accel.o 00:08:41.083 LINK spdk_nvme_identify 00:08:41.083 LINK spdk_dd 00:08:41.083 LINK nvme_fuzz 00:08:41.083 LINK spdk_top 00:08:41.341 CXX test/cpp_headers/accel_module.o 00:08:41.341 CC test/event/event_perf/event_perf.o 00:08:41.342 CC test/env/vtophys/vtophys.o 00:08:41.342 CC test/env/mem_callbacks/mem_callbacks.o 00:08:41.342 CC examples/nvme/hello_world/hello_world.o 00:08:41.342 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:08:41.342 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:08:41.342 CXX test/cpp_headers/assert.o 00:08:41.342 LINK event_perf 00:08:41.342 CC test/nvme/aer/aer.o 00:08:41.600 LINK vtophys 00:08:41.600 LINK spdk_nvme 00:08:41.600 LINK spdk_bdev 00:08:41.600 LINK env_dpdk_post_init 00:08:41.600 CXX test/cpp_headers/barrier.o 00:08:41.600 LINK hello_world 00:08:41.600 CC test/event/reactor/reactor.o 00:08:41.600 CC test/nvme/reset/reset.o 00:08:41.600 CC test/nvme/sgl/sgl.o 00:08:41.858 CXX test/cpp_headers/base64.o 00:08:41.858 LINK aer 00:08:41.858 CC test/nvme/e2edp/nvme_dp.o 00:08:41.858 CC app/vhost/vhost.o 00:08:41.858 LINK reactor 00:08:41.858 CC examples/nvme/reconnect/reconnect.o 00:08:41.858 LINK mem_callbacks 00:08:41.858 CXX test/cpp_headers/bdev.o 00:08:41.858 LINK reset 00:08:41.858 LINK sgl 00:08:42.118 LINK vhost 00:08:42.118 CC 
test/nvme/overhead/overhead.o 00:08:42.118 CXX test/cpp_headers/bdev_module.o 00:08:42.118 CC test/event/reactor_perf/reactor_perf.o 00:08:42.118 LINK nvme_dp 00:08:42.118 CC test/env/memory/memory_ut.o 00:08:42.118 CXX test/cpp_headers/bdev_zone.o 00:08:42.118 LINK reconnect 00:08:42.118 LINK reactor_perf 00:08:42.377 CXX test/cpp_headers/bit_array.o 00:08:42.377 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:08:42.377 CC examples/fsdev/hello_world/hello_fsdev.o 00:08:42.377 LINK overhead 00:08:42.377 CC test/nvme/err_injection/err_injection.o 00:08:42.377 CXX test/cpp_headers/bit_pool.o 00:08:42.377 CC test/nvme/startup/startup.o 00:08:42.377 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:08:42.377 CC test/event/app_repeat/app_repeat.o 00:08:42.377 CC examples/nvme/nvme_manage/nvme_manage.o 00:08:42.636 CXX test/cpp_headers/blob_bdev.o 00:08:42.636 LINK err_injection 00:08:42.636 LINK app_repeat 00:08:42.636 LINK startup 00:08:42.636 LINK hello_fsdev 00:08:42.636 CC test/nvme/reserve/reserve.o 00:08:42.636 CXX test/cpp_headers/blobfs_bdev.o 00:08:42.636 CXX test/cpp_headers/blobfs.o 00:08:42.896 CXX test/cpp_headers/blob.o 00:08:42.896 CXX test/cpp_headers/conf.o 00:08:42.896 LINK reserve 00:08:42.896 CC test/event/scheduler/scheduler.o 00:08:42.896 LINK vhost_fuzz 00:08:42.896 CC test/env/pci/pci_ut.o 00:08:43.154 CXX test/cpp_headers/config.o 00:08:43.154 LINK nvme_manage 00:08:43.154 CXX test/cpp_headers/cpuset.o 00:08:43.154 CC test/nvme/simple_copy/simple_copy.o 00:08:43.154 CC examples/accel/perf/accel_perf.o 00:08:43.154 CC test/rpc_client/rpc_client_test.o 00:08:43.154 LINK scheduler 00:08:43.154 CXX test/cpp_headers/crc16.o 00:08:43.154 LINK iscsi_fuzz 00:08:43.154 CC examples/nvme/arbitration/arbitration.o 00:08:43.413 CC test/accel/dif/dif.o 00:08:43.413 LINK memory_ut 00:08:43.413 LINK simple_copy 00:08:43.413 LINK rpc_client_test 00:08:43.413 CXX test/cpp_headers/crc32.o 00:08:43.413 CC test/app/histogram_perf/histogram_perf.o 00:08:43.413 LINK pci_ut 
00:08:43.413 CC test/app/jsoncat/jsoncat.o 00:08:43.413 CXX test/cpp_headers/crc64.o 00:08:43.671 CC test/nvme/connect_stress/connect_stress.o 00:08:43.671 CC test/app/stub/stub.o 00:08:43.671 LINK histogram_perf 00:08:43.671 LINK accel_perf 00:08:43.671 LINK arbitration 00:08:43.671 LINK jsoncat 00:08:43.671 CXX test/cpp_headers/dif.o 00:08:43.671 CC test/blobfs/mkfs/mkfs.o 00:08:43.671 LINK connect_stress 00:08:43.671 LINK stub 00:08:43.929 CC test/nvme/boot_partition/boot_partition.o 00:08:43.929 CC examples/nvme/hotplug/hotplug.o 00:08:43.929 CXX test/cpp_headers/dma.o 00:08:43.929 CC examples/nvme/cmb_copy/cmb_copy.o 00:08:43.929 CC examples/nvme/abort/abort.o 00:08:43.929 LINK mkfs 00:08:43.929 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:08:43.929 LINK boot_partition 00:08:43.929 CXX test/cpp_headers/endian.o 00:08:43.929 CC test/nvme/compliance/nvme_compliance.o 00:08:43.929 LINK dif 00:08:44.188 LINK cmb_copy 00:08:44.188 LINK hotplug 00:08:44.188 LINK pmr_persistence 00:08:44.188 CXX test/cpp_headers/env_dpdk.o 00:08:44.188 CC test/lvol/esnap/esnap.o 00:08:44.188 CC examples/blob/cli/blobcli.o 00:08:44.188 CC examples/blob/hello_world/hello_blob.o 00:08:44.447 CXX test/cpp_headers/env.o 00:08:44.447 CC test/nvme/fused_ordering/fused_ordering.o 00:08:44.447 LINK abort 00:08:44.447 CC test/nvme/fdp/fdp.o 00:08:44.447 CC test/nvme/doorbell_aers/doorbell_aers.o 00:08:44.447 CC test/nvme/cuse/cuse.o 00:08:44.447 LINK nvme_compliance 00:08:44.447 CXX test/cpp_headers/event.o 00:08:44.447 CXX test/cpp_headers/fd_group.o 00:08:44.447 LINK hello_blob 00:08:44.447 LINK fused_ordering 00:08:44.447 LINK doorbell_aers 00:08:44.706 CXX test/cpp_headers/fd.o 00:08:44.706 LINK fdp 00:08:44.706 CXX test/cpp_headers/file.o 00:08:44.706 CXX test/cpp_headers/fsdev.o 00:08:44.706 CXX test/cpp_headers/fsdev_module.o 00:08:44.706 CXX test/cpp_headers/ftl.o 00:08:44.706 LINK blobcli 00:08:44.965 CC examples/bdev/hello_world/hello_bdev.o 00:08:44.965 CC 
test/bdev/bdevio/bdevio.o 00:08:44.965 CXX test/cpp_headers/fuse_dispatcher.o 00:08:44.965 CXX test/cpp_headers/gpt_spec.o 00:08:44.965 CXX test/cpp_headers/hexlify.o 00:08:44.965 CXX test/cpp_headers/histogram_data.o 00:08:44.965 CC examples/bdev/bdevperf/bdevperf.o 00:08:44.965 CXX test/cpp_headers/idxd.o 00:08:44.965 CXX test/cpp_headers/idxd_spec.o 00:08:44.965 CXX test/cpp_headers/init.o 00:08:44.965 CXX test/cpp_headers/ioat.o 00:08:44.965 LINK hello_bdev 00:08:44.965 CXX test/cpp_headers/ioat_spec.o 00:08:45.224 CXX test/cpp_headers/iscsi_spec.o 00:08:45.224 CXX test/cpp_headers/json.o 00:08:45.224 CXX test/cpp_headers/jsonrpc.o 00:08:45.224 CXX test/cpp_headers/keyring.o 00:08:45.224 CXX test/cpp_headers/keyring_module.o 00:08:45.224 LINK bdevio 00:08:45.224 CXX test/cpp_headers/likely.o 00:08:45.224 CXX test/cpp_headers/log.o 00:08:45.224 CXX test/cpp_headers/lvol.o 00:08:45.224 CXX test/cpp_headers/md5.o 00:08:45.483 CXX test/cpp_headers/memory.o 00:08:45.483 CXX test/cpp_headers/mmio.o 00:08:45.483 CXX test/cpp_headers/nbd.o 00:08:45.483 CXX test/cpp_headers/net.o 00:08:45.483 CXX test/cpp_headers/notify.o 00:08:45.483 CXX test/cpp_headers/nvme.o 00:08:45.483 CXX test/cpp_headers/nvme_intel.o 00:08:45.483 CXX test/cpp_headers/nvme_ocssd.o 00:08:45.483 CXX test/cpp_headers/nvme_ocssd_spec.o 00:08:45.483 CXX test/cpp_headers/nvme_spec.o 00:08:45.483 CXX test/cpp_headers/nvme_zns.o 00:08:45.483 CXX test/cpp_headers/nvmf_cmd.o 00:08:45.742 LINK cuse 00:08:45.742 CXX test/cpp_headers/nvmf_fc_spec.o 00:08:45.742 CXX test/cpp_headers/nvmf.o 00:08:45.742 CXX test/cpp_headers/nvmf_spec.o 00:08:45.742 CXX test/cpp_headers/nvmf_transport.o 00:08:45.742 CXX test/cpp_headers/opal.o 00:08:45.742 CXX test/cpp_headers/opal_spec.o 00:08:45.742 CXX test/cpp_headers/pci_ids.o 00:08:45.742 CXX test/cpp_headers/pipe.o 00:08:45.742 LINK bdevperf 00:08:45.742 CXX test/cpp_headers/queue.o 00:08:45.742 CXX test/cpp_headers/reduce.o 00:08:45.742 CXX test/cpp_headers/rpc.o 
00:08:46.000 CXX test/cpp_headers/scheduler.o 00:08:46.000 CXX test/cpp_headers/scsi.o 00:08:46.000 CXX test/cpp_headers/scsi_spec.o 00:08:46.000 CXX test/cpp_headers/sock.o 00:08:46.000 CXX test/cpp_headers/stdinc.o 00:08:46.000 CXX test/cpp_headers/string.o 00:08:46.000 CXX test/cpp_headers/thread.o 00:08:46.000 CXX test/cpp_headers/trace.o 00:08:46.000 CXX test/cpp_headers/trace_parser.o 00:08:46.000 CXX test/cpp_headers/tree.o 00:08:46.000 CXX test/cpp_headers/ublk.o 00:08:46.000 CXX test/cpp_headers/util.o 00:08:46.000 CXX test/cpp_headers/uuid.o 00:08:46.259 CXX test/cpp_headers/version.o 00:08:46.259 CXX test/cpp_headers/vfio_user_pci.o 00:08:46.259 CXX test/cpp_headers/vfio_user_spec.o 00:08:46.259 CXX test/cpp_headers/vhost.o 00:08:46.259 CXX test/cpp_headers/vmd.o 00:08:46.259 CXX test/cpp_headers/xor.o 00:08:46.259 CXX test/cpp_headers/zipf.o 00:08:46.259 CC examples/nvmf/nvmf/nvmf.o 00:08:46.577 LINK nvmf 00:08:49.874 LINK esnap 00:08:50.133 00:08:50.133 real 1m28.448s 00:08:50.133 user 7m40.599s 00:08:50.133 sys 1m52.039s 00:08:50.133 13:28:49 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:08:50.133 13:28:49 make -- common/autotest_common.sh@10 -- $ set +x 00:08:50.133 ************************************ 00:08:50.133 END TEST make 00:08:50.133 ************************************ 00:08:50.393 13:28:49 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:08:50.393 13:28:49 -- pm/common@29 -- $ signal_monitor_resources TERM 00:08:50.393 13:28:49 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:08:50.393 13:28:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:50.393 13:28:49 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:08:50.393 13:28:49 -- pm/common@44 -- $ pid=5251 00:08:50.393 13:28:49 -- pm/common@50 -- $ kill -TERM 5251 00:08:50.393 13:28:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:50.393 13:28:49 -- pm/common@43 -- $ [[ -e 
/home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:08:50.393 13:28:49 -- pm/common@44 -- $ pid=5252 00:08:50.393 13:28:49 -- pm/common@50 -- $ kill -TERM 5252 00:08:50.393 13:28:49 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:08:50.393 13:28:49 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:08:50.393 13:28:49 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:50.393 13:28:49 -- common/autotest_common.sh@1693 -- # lcov --version 00:08:50.393 13:28:49 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:50.393 13:28:49 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:50.393 13:28:49 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:50.393 13:28:49 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:50.393 13:28:49 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:50.393 13:28:49 -- scripts/common.sh@336 -- # IFS=.-: 00:08:50.393 13:28:49 -- scripts/common.sh@336 -- # read -ra ver1 00:08:50.393 13:28:49 -- scripts/common.sh@337 -- # IFS=.-: 00:08:50.393 13:28:49 -- scripts/common.sh@337 -- # read -ra ver2 00:08:50.393 13:28:49 -- scripts/common.sh@338 -- # local 'op=<' 00:08:50.393 13:28:49 -- scripts/common.sh@340 -- # ver1_l=2 00:08:50.393 13:28:49 -- scripts/common.sh@341 -- # ver2_l=1 00:08:50.393 13:28:49 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:50.393 13:28:49 -- scripts/common.sh@344 -- # case "$op" in 00:08:50.393 13:28:49 -- scripts/common.sh@345 -- # : 1 00:08:50.393 13:28:49 -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:50.393 13:28:49 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:50.393 13:28:49 -- scripts/common.sh@365 -- # decimal 1 00:08:50.393 13:28:49 -- scripts/common.sh@353 -- # local d=1 00:08:50.393 13:28:49 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:50.393 13:28:49 -- scripts/common.sh@355 -- # echo 1 00:08:50.394 13:28:49 -- scripts/common.sh@365 -- # ver1[v]=1 00:08:50.394 13:28:49 -- scripts/common.sh@366 -- # decimal 2 00:08:50.394 13:28:49 -- scripts/common.sh@353 -- # local d=2 00:08:50.394 13:28:49 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:50.394 13:28:49 -- scripts/common.sh@355 -- # echo 2 00:08:50.394 13:28:49 -- scripts/common.sh@366 -- # ver2[v]=2 00:08:50.394 13:28:49 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:50.394 13:28:49 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:50.394 13:28:49 -- scripts/common.sh@368 -- # return 0 00:08:50.394 13:28:49 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:50.394 13:28:49 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:50.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.394 --rc genhtml_branch_coverage=1 00:08:50.394 --rc genhtml_function_coverage=1 00:08:50.394 --rc genhtml_legend=1 00:08:50.394 --rc geninfo_all_blocks=1 00:08:50.394 --rc geninfo_unexecuted_blocks=1 00:08:50.394 00:08:50.394 ' 00:08:50.394 13:28:49 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:50.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.394 --rc genhtml_branch_coverage=1 00:08:50.394 --rc genhtml_function_coverage=1 00:08:50.394 --rc genhtml_legend=1 00:08:50.394 --rc geninfo_all_blocks=1 00:08:50.394 --rc geninfo_unexecuted_blocks=1 00:08:50.394 00:08:50.394 ' 00:08:50.394 13:28:49 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:50.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.394 --rc genhtml_branch_coverage=1 00:08:50.394 --rc 
genhtml_function_coverage=1 00:08:50.394 --rc genhtml_legend=1 00:08:50.394 --rc geninfo_all_blocks=1 00:08:50.394 --rc geninfo_unexecuted_blocks=1 00:08:50.394 00:08:50.394 ' 00:08:50.394 13:28:49 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:50.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.394 --rc genhtml_branch_coverage=1 00:08:50.394 --rc genhtml_function_coverage=1 00:08:50.394 --rc genhtml_legend=1 00:08:50.394 --rc geninfo_all_blocks=1 00:08:50.394 --rc geninfo_unexecuted_blocks=1 00:08:50.394 00:08:50.394 ' 00:08:50.394 13:28:49 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:50.394 13:28:49 -- nvmf/common.sh@7 -- # uname -s 00:08:50.394 13:28:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:50.394 13:28:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:50.394 13:28:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:50.394 13:28:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:50.394 13:28:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:50.394 13:28:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:50.394 13:28:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:50.394 13:28:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:50.394 13:28:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:50.394 13:28:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:50.653 13:28:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38e61dd9-7663-487a-9216-d82314e42e23 00:08:50.653 13:28:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=38e61dd9-7663-487a-9216-d82314e42e23 00:08:50.653 13:28:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:50.653 13:28:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:50.653 13:28:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:50.653 13:28:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:08:50.653 13:28:49 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:50.653 13:28:49 -- scripts/common.sh@15 -- # shopt -s extglob 00:08:50.653 13:28:49 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.653 13:28:49 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.653 13:28:49 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.653 13:28:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.653 13:28:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.653 13:28:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.653 13:28:49 -- paths/export.sh@5 -- # export PATH 00:08:50.653 13:28:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.653 13:28:49 -- nvmf/common.sh@51 -- # : 0 00:08:50.653 13:28:49 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:50.653 13:28:49 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:50.653 13:28:49 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:08:50.653 13:28:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:50.653 13:28:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:50.653 13:28:49 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:50.653 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:50.653 13:28:49 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:50.653 13:28:49 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:50.653 13:28:49 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:50.653 13:28:49 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:08:50.653 13:28:49 -- spdk/autotest.sh@32 -- # uname -s 00:08:50.653 13:28:49 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:08:50.653 13:28:49 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:08:50.653 13:28:49 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:08:50.653 13:28:49 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:08:50.653 13:28:49 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:08:50.653 13:28:49 -- spdk/autotest.sh@44 -- # modprobe nbd 00:08:50.653 13:28:49 -- spdk/autotest.sh@46 -- # type -P udevadm 00:08:50.653 13:28:49 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:08:50.653 13:28:49 -- spdk/autotest.sh@48 -- # udevadm_pid=54233 00:08:50.653 13:28:49 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:08:50.653 13:28:49 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:08:50.653 13:28:49 -- pm/common@17 -- # local monitor 00:08:50.653 13:28:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:50.653 13:28:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:50.653 13:28:49 -- pm/common@21 -- # date +%s 00:08:50.653 13:28:49 -- pm/common@21 -- # date +%s 00:08:50.653 13:28:49 -- 
pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732109329 00:08:50.653 13:28:49 -- pm/common@25 -- # sleep 1 00:08:50.653 13:28:49 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732109329 00:08:50.653 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732109329_collect-vmstat.pm.log 00:08:50.653 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732109329_collect-cpu-load.pm.log 00:08:51.589 13:28:50 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:08:51.589 13:28:50 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:08:51.589 13:28:50 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:51.589 13:28:50 -- common/autotest_common.sh@10 -- # set +x 00:08:51.589 13:28:50 -- spdk/autotest.sh@59 -- # create_test_list 00:08:51.589 13:28:50 -- common/autotest_common.sh@752 -- # xtrace_disable 00:08:51.589 13:28:50 -- common/autotest_common.sh@10 -- # set +x 00:08:51.589 13:28:51 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:08:51.589 13:28:51 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:08:51.589 13:28:51 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:08:51.589 13:28:51 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:08:51.589 13:28:51 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:08:51.589 13:28:51 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:08:51.848 13:28:51 -- common/autotest_common.sh@1457 -- # uname 00:08:51.848 13:28:51 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:08:51.848 13:28:51 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:08:51.848 13:28:51 -- common/autotest_common.sh@1477 -- # 
uname 00:08:51.848 13:28:51 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:08:51.848 13:28:51 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:08:51.848 13:28:51 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:08:51.848 lcov: LCOV version 1.15 00:08:51.848 13:28:51 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:09:06.731 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:09:06.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:09:24.834 13:29:21 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:09:24.834 13:29:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:24.834 13:29:21 -- common/autotest_common.sh@10 -- # set +x 00:09:24.834 13:29:21 -- spdk/autotest.sh@78 -- # rm -f 00:09:24.834 13:29:21 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:24.834 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:24.834 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:09:24.835 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:09:24.835 13:29:22 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:09:24.835 13:29:22 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:09:24.835 13:29:22 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:09:24.835 13:29:22 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:09:24.835 13:29:22 
-- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:09:24.835 13:29:22 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:09:24.835 13:29:22 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:09:24.835 13:29:22 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:24.835 13:29:22 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:24.835 13:29:22 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:09:24.835 13:29:22 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:09:24.835 13:29:22 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:09:24.835 13:29:22 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:09:24.835 13:29:22 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:24.835 13:29:22 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:09:24.835 13:29:22 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:09:24.835 13:29:22 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:09:24.835 13:29:22 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:09:24.835 13:29:22 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:24.835 13:29:22 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:09:24.835 13:29:22 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:09:24.835 13:29:22 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:09:24.835 13:29:22 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:09:24.835 13:29:22 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:24.835 13:29:22 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:09:24.835 13:29:22 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:24.835 13:29:22 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:24.835 13:29:22 -- spdk/autotest.sh@100 -- # block_in_use 
/dev/nvme0n1 00:09:24.835 13:29:22 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:09:24.835 13:29:22 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:09:24.835 No valid GPT data, bailing 00:09:24.835 13:29:22 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:09:24.835 13:29:22 -- scripts/common.sh@394 -- # pt= 00:09:24.835 13:29:22 -- scripts/common.sh@395 -- # return 1 00:09:24.835 13:29:22 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:09:24.835 1+0 records in 00:09:24.835 1+0 records out 00:09:24.835 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00628523 s, 167 MB/s 00:09:24.835 13:29:22 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:24.835 13:29:22 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:24.835 13:29:22 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:09:24.835 13:29:22 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:09:24.835 13:29:22 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:09:24.835 No valid GPT data, bailing 00:09:24.835 13:29:22 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:09:24.835 13:29:22 -- scripts/common.sh@394 -- # pt= 00:09:24.835 13:29:22 -- scripts/common.sh@395 -- # return 1 00:09:24.835 13:29:22 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:09:24.835 1+0 records in 00:09:24.835 1+0 records out 00:09:24.835 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00627517 s, 167 MB/s 00:09:24.835 13:29:22 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:24.835 13:29:22 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:24.835 13:29:22 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:09:24.835 13:29:22 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:09:24.835 13:29:22 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:09:24.835 
No valid GPT data, bailing 00:09:24.835 13:29:22 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:09:24.835 13:29:22 -- scripts/common.sh@394 -- # pt= 00:09:24.835 13:29:22 -- scripts/common.sh@395 -- # return 1 00:09:24.835 13:29:22 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:09:24.835 1+0 records in 00:09:24.835 1+0 records out 00:09:24.835 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00627242 s, 167 MB/s 00:09:24.835 13:29:22 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:24.835 13:29:22 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:24.835 13:29:22 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:09:24.835 13:29:22 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:09:24.835 13:29:22 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:09:24.835 No valid GPT data, bailing 00:09:24.835 13:29:22 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:09:24.835 13:29:22 -- scripts/common.sh@394 -- # pt= 00:09:24.835 13:29:22 -- scripts/common.sh@395 -- # return 1 00:09:24.835 13:29:22 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:09:24.835 1+0 records in 00:09:24.835 1+0 records out 00:09:24.835 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00568475 s, 184 MB/s 00:09:24.835 13:29:22 -- spdk/autotest.sh@105 -- # sync 00:09:24.835 13:29:22 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:09:24.835 13:29:22 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:09:24.835 13:29:22 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:09:26.251 13:29:25 -- spdk/autotest.sh@111 -- # uname -s 00:09:26.251 13:29:25 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:09:26.251 13:29:25 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:09:26.251 13:29:25 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:09:27.187 
0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:27.187 Hugepages 00:09:27.187 node hugesize free / total 00:09:27.187 node0 1048576kB 0 / 0 00:09:27.187 node0 2048kB 0 / 0 00:09:27.187 00:09:27.187 Type BDF Vendor Device NUMA Driver Device Block devices 00:09:27.187 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:09:27.187 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:09:27.447 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:09:27.447 13:29:26 -- spdk/autotest.sh@117 -- # uname -s 00:09:27.447 13:29:26 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:09:27.447 13:29:26 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:09:27.447 13:29:26 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:28.382 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:28.382 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:28.382 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:28.382 13:29:27 -- common/autotest_common.sh@1517 -- # sleep 1 00:09:29.758 13:29:28 -- common/autotest_common.sh@1518 -- # bdfs=() 00:09:29.758 13:29:28 -- common/autotest_common.sh@1518 -- # local bdfs 00:09:29.758 13:29:28 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:09:29.758 13:29:28 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:09:29.758 13:29:28 -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:29.758 13:29:28 -- common/autotest_common.sh@1498 -- # local bdfs 00:09:29.758 13:29:28 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:29.758 13:29:28 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:29.758 13:29:28 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:29.758 13:29:28 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:09:29.758 13:29:28 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:09:29.758 13:29:28 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:30.017 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:30.017 Waiting for block devices as requested 00:09:30.276 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:30.276 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:30.276 13:29:29 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:09:30.276 13:29:29 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:09:30.276 13:29:29 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:09:30.276 13:29:29 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:09:30.276 13:29:29 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:09:30.276 13:29:29 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:09:30.534 13:29:29 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:09:30.534 13:29:29 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:09:30.534 13:29:29 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:09:30.534 13:29:29 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:09:30.534 13:29:29 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:09:30.534 13:29:29 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:09:30.534 13:29:29 -- common/autotest_common.sh@1531 -- # grep oacs 00:09:30.534 13:29:29 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:09:30.534 13:29:29 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:09:30.534 13:29:29 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:09:30.534 13:29:29 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:09:30.534 13:29:29 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:09:30.534 13:29:29 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:09:30.534 13:29:29 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:09:30.534 13:29:29 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:09:30.534 13:29:29 -- common/autotest_common.sh@1543 -- # continue 00:09:30.534 13:29:29 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:09:30.534 13:29:29 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:09:30.534 13:29:29 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:09:30.534 13:29:29 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:09:30.534 13:29:29 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:09:30.534 13:29:29 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:09:30.534 13:29:29 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:09:30.534 13:29:29 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:09:30.534 13:29:29 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:09:30.534 13:29:29 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:09:30.535 13:29:29 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:09:30.535 13:29:29 -- common/autotest_common.sh@1531 -- # grep oacs 00:09:30.535 13:29:29 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:09:30.535 13:29:29 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:09:30.535 13:29:29 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:09:30.535 13:29:29 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:09:30.535 13:29:29 -- common/autotest_common.sh@1540 -- # grep unvmcap 
00:09:30.535 13:29:29 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:09:30.535 13:29:29 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:09:30.535 13:29:29 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:09:30.535 13:29:29 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:09:30.535 13:29:29 -- common/autotest_common.sh@1543 -- # continue 00:09:30.535 13:29:29 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:09:30.535 13:29:29 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:30.535 13:29:29 -- common/autotest_common.sh@10 -- # set +x 00:09:30.535 13:29:29 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:09:30.535 13:29:29 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:30.535 13:29:29 -- common/autotest_common.sh@10 -- # set +x 00:09:30.535 13:29:29 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:31.466 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:31.466 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:31.466 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:31.466 13:29:30 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:09:31.466 13:29:30 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:31.466 13:29:30 -- common/autotest_common.sh@10 -- # set +x 00:09:31.725 13:29:30 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:09:31.725 13:29:30 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:09:31.725 13:29:30 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:09:31.725 13:29:31 -- common/autotest_common.sh@1563 -- # bdfs=() 00:09:31.725 13:29:31 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:09:31.725 13:29:31 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:09:31.725 13:29:31 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:09:31.725 13:29:31 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:09:31.725 
13:29:31 -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:31.725 13:29:31 -- common/autotest_common.sh@1498 -- # local bdfs 00:09:31.725 13:29:31 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:31.725 13:29:31 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:31.725 13:29:31 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:31.725 13:29:31 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:09:31.725 13:29:31 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:09:31.725 13:29:31 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:09:31.725 13:29:31 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:09:31.725 13:29:31 -- common/autotest_common.sh@1566 -- # device=0x0010 00:09:31.725 13:29:31 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:09:31.725 13:29:31 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:09:31.725 13:29:31 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:09:31.725 13:29:31 -- common/autotest_common.sh@1566 -- # device=0x0010 00:09:31.725 13:29:31 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:09:31.725 13:29:31 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:09:31.725 13:29:31 -- common/autotest_common.sh@1572 -- # return 0 00:09:31.725 13:29:31 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:09:31.725 13:29:31 -- common/autotest_common.sh@1580 -- # return 0 00:09:31.725 13:29:31 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:09:31.725 13:29:31 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:09:31.725 13:29:31 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:09:31.725 13:29:31 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:09:31.725 13:29:31 -- spdk/autotest.sh@149 -- # timing_enter lib 00:09:31.725 13:29:31 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:09:31.725 13:29:31 -- common/autotest_common.sh@10 -- # set +x 00:09:31.725 13:29:31 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:09:31.725 13:29:31 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:31.725 13:29:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:31.725 13:29:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.725 13:29:31 -- common/autotest_common.sh@10 -- # set +x 00:09:31.725 ************************************ 00:09:31.725 START TEST env 00:09:31.725 ************************************ 00:09:31.725 13:29:31 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:31.985 * Looking for test storage... 00:09:31.985 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:09:31.985 13:29:31 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:31.985 13:29:31 env -- common/autotest_common.sh@1693 -- # lcov --version 00:09:31.985 13:29:31 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:31.985 13:29:31 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:31.985 13:29:31 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:31.985 13:29:31 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:31.985 13:29:31 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:31.985 13:29:31 env -- scripts/common.sh@336 -- # IFS=.-: 00:09:31.985 13:29:31 env -- scripts/common.sh@336 -- # read -ra ver1 00:09:31.985 13:29:31 env -- scripts/common.sh@337 -- # IFS=.-: 00:09:31.985 13:29:31 env -- scripts/common.sh@337 -- # read -ra ver2 00:09:31.985 13:29:31 env -- scripts/common.sh@338 -- # local 'op=<' 00:09:31.985 13:29:31 env -- scripts/common.sh@340 -- # ver1_l=2 00:09:31.985 13:29:31 env -- scripts/common.sh@341 -- # ver2_l=1 00:09:31.985 13:29:31 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:31.985 13:29:31 env -- 
scripts/common.sh@344 -- # case "$op" in 00:09:31.985 13:29:31 env -- scripts/common.sh@345 -- # : 1 00:09:31.985 13:29:31 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:31.985 13:29:31 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:31.985 13:29:31 env -- scripts/common.sh@365 -- # decimal 1 00:09:31.985 13:29:31 env -- scripts/common.sh@353 -- # local d=1 00:09:31.985 13:29:31 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:31.985 13:29:31 env -- scripts/common.sh@355 -- # echo 1 00:09:31.985 13:29:31 env -- scripts/common.sh@365 -- # ver1[v]=1 00:09:31.985 13:29:31 env -- scripts/common.sh@366 -- # decimal 2 00:09:31.985 13:29:31 env -- scripts/common.sh@353 -- # local d=2 00:09:31.985 13:29:31 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:31.985 13:29:31 env -- scripts/common.sh@355 -- # echo 2 00:09:31.985 13:29:31 env -- scripts/common.sh@366 -- # ver2[v]=2 00:09:31.985 13:29:31 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:31.985 13:29:31 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:31.985 13:29:31 env -- scripts/common.sh@368 -- # return 0 00:09:31.985 13:29:31 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:31.985 13:29:31 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:31.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.985 --rc genhtml_branch_coverage=1 00:09:31.985 --rc genhtml_function_coverage=1 00:09:31.985 --rc genhtml_legend=1 00:09:31.985 --rc geninfo_all_blocks=1 00:09:31.985 --rc geninfo_unexecuted_blocks=1 00:09:31.985 00:09:31.985 ' 00:09:31.985 13:29:31 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:31.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.985 --rc genhtml_branch_coverage=1 00:09:31.985 --rc genhtml_function_coverage=1 00:09:31.985 --rc genhtml_legend=1 00:09:31.985 --rc 
geninfo_all_blocks=1 00:09:31.985 --rc geninfo_unexecuted_blocks=1 00:09:31.985 00:09:31.985 ' 00:09:31.985 13:29:31 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:31.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.985 --rc genhtml_branch_coverage=1 00:09:31.985 --rc genhtml_function_coverage=1 00:09:31.985 --rc genhtml_legend=1 00:09:31.985 --rc geninfo_all_blocks=1 00:09:31.985 --rc geninfo_unexecuted_blocks=1 00:09:31.985 00:09:31.985 ' 00:09:31.985 13:29:31 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:31.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.985 --rc genhtml_branch_coverage=1 00:09:31.985 --rc genhtml_function_coverage=1 00:09:31.985 --rc genhtml_legend=1 00:09:31.985 --rc geninfo_all_blocks=1 00:09:31.985 --rc geninfo_unexecuted_blocks=1 00:09:31.985 00:09:31.985 ' 00:09:31.985 13:29:31 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:31.985 13:29:31 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:31.985 13:29:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.985 13:29:31 env -- common/autotest_common.sh@10 -- # set +x 00:09:31.986 ************************************ 00:09:31.986 START TEST env_memory 00:09:31.986 ************************************ 00:09:31.986 13:29:31 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:31.986 00:09:31.986 00:09:31.986 CUnit - A unit testing framework for C - Version 2.1-3 00:09:31.986 http://cunit.sourceforge.net/ 00:09:31.986 00:09:31.986 00:09:31.986 Suite: memory 00:09:31.986 Test: alloc and free memory map ...[2024-11-20 13:29:31.453683] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:09:32.245 passed 00:09:32.245 Test: mem map translation ...[2024-11-20 13:29:31.498706] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:09:32.245 [2024-11-20 13:29:31.498891] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:09:32.245 [2024-11-20 13:29:31.499033] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:09:32.245 [2024-11-20 13:29:31.499119] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:09:32.245 passed 00:09:32.245 Test: mem map registration ...[2024-11-20 13:29:31.571337] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:09:32.245 [2024-11-20 13:29:31.571561] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:09:32.245 passed 00:09:32.245 Test: mem map adjacent registrations ...passed 00:09:32.245 00:09:32.245 Run Summary: Type Total Ran Passed Failed Inactive 00:09:32.245 suites 1 1 n/a 0 0 00:09:32.245 tests 4 4 4 0 0 00:09:32.245 asserts 152 152 152 0 n/a 00:09:32.245 00:09:32.245 Elapsed time = 0.258 seconds 00:09:32.245 ************************************ 00:09:32.245 END TEST env_memory 00:09:32.245 ************************************ 00:09:32.245 00:09:32.245 real 0m0.319s 00:09:32.245 user 0m0.273s 00:09:32.245 sys 0m0.035s 00:09:32.245 13:29:31 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.245 13:29:31 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:09:32.504 13:29:31 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:32.504 
13:29:31 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:32.504 13:29:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.504 13:29:31 env -- common/autotest_common.sh@10 -- # set +x 00:09:32.504 ************************************ 00:09:32.504 START TEST env_vtophys 00:09:32.504 ************************************ 00:09:32.504 13:29:31 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:32.504 EAL: lib.eal log level changed from notice to debug 00:09:32.504 EAL: Detected lcore 0 as core 0 on socket 0 00:09:32.504 EAL: Detected lcore 1 as core 0 on socket 0 00:09:32.504 EAL: Detected lcore 2 as core 0 on socket 0 00:09:32.504 EAL: Detected lcore 3 as core 0 on socket 0 00:09:32.504 EAL: Detected lcore 4 as core 0 on socket 0 00:09:32.504 EAL: Detected lcore 5 as core 0 on socket 0 00:09:32.504 EAL: Detected lcore 6 as core 0 on socket 0 00:09:32.504 EAL: Detected lcore 7 as core 0 on socket 0 00:09:32.504 EAL: Detected lcore 8 as core 0 on socket 0 00:09:32.504 EAL: Detected lcore 9 as core 0 on socket 0 00:09:32.504 EAL: Maximum logical cores by configuration: 128 00:09:32.504 EAL: Detected CPU lcores: 10 00:09:32.504 EAL: Detected NUMA nodes: 1 00:09:32.504 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:09:32.504 EAL: Detected shared linkage of DPDK 00:09:32.504 EAL: No shared files mode enabled, IPC will be disabled 00:09:32.504 EAL: Selected IOVA mode 'PA' 00:09:32.504 EAL: Probing VFIO support... 00:09:32.504 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:09:32.504 EAL: VFIO modules not loaded, skipping VFIO support... 00:09:32.504 EAL: Ask a virtual area of 0x2e000 bytes 00:09:32.504 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:09:32.504 EAL: Setting up physically contiguous memory... 
00:09:32.504 EAL: Setting maximum number of open files to 524288 00:09:32.504 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:09:32.504 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:09:32.504 EAL: Ask a virtual area of 0x61000 bytes 00:09:32.504 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:09:32.504 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:32.504 EAL: Ask a virtual area of 0x400000000 bytes 00:09:32.504 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:09:32.504 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:09:32.504 EAL: Ask a virtual area of 0x61000 bytes 00:09:32.504 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:09:32.504 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:32.504 EAL: Ask a virtual area of 0x400000000 bytes 00:09:32.504 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:09:32.504 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:09:32.504 EAL: Ask a virtual area of 0x61000 bytes 00:09:32.504 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:09:32.504 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:32.504 EAL: Ask a virtual area of 0x400000000 bytes 00:09:32.504 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:09:32.504 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:09:32.504 EAL: Ask a virtual area of 0x61000 bytes 00:09:32.504 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:09:32.504 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:32.504 EAL: Ask a virtual area of 0x400000000 bytes 00:09:32.504 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:09:32.504 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:09:32.504 EAL: Hugepages will be freed exactly as allocated. 
00:09:32.504 EAL: No shared files mode enabled, IPC is disabled 00:09:32.504 EAL: No shared files mode enabled, IPC is disabled 00:09:32.504 EAL: TSC frequency is ~2490000 KHz 00:09:32.504 EAL: Main lcore 0 is ready (tid=7ffb5aaa5a40;cpuset=[0]) 00:09:32.504 EAL: Trying to obtain current memory policy. 00:09:32.504 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:32.504 EAL: Restoring previous memory policy: 0 00:09:32.504 EAL: request: mp_malloc_sync 00:09:32.504 EAL: No shared files mode enabled, IPC is disabled 00:09:32.504 EAL: Heap on socket 0 was expanded by 2MB 00:09:32.504 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:09:32.504 EAL: No PCI address specified using 'addr=' in: bus=pci 00:09:32.504 EAL: Mem event callback 'spdk:(nil)' registered 00:09:32.504 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:09:32.764 00:09:32.764 00:09:32.764 CUnit - A unit testing framework for C - Version 2.1-3 00:09:32.764 http://cunit.sourceforge.net/ 00:09:32.764 00:09:32.764 00:09:32.764 Suite: components_suite 00:09:33.022 Test: vtophys_malloc_test ...passed 00:09:33.022 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:09:33.022 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:33.022 EAL: Restoring previous memory policy: 4 00:09:33.022 EAL: Calling mem event callback 'spdk:(nil)' 00:09:33.022 EAL: request: mp_malloc_sync 00:09:33.022 EAL: No shared files mode enabled, IPC is disabled 00:09:33.022 EAL: Heap on socket 0 was expanded by 4MB 00:09:33.022 EAL: Calling mem event callback 'spdk:(nil)' 00:09:33.022 EAL: request: mp_malloc_sync 00:09:33.022 EAL: No shared files mode enabled, IPC is disabled 00:09:33.022 EAL: Heap on socket 0 was shrunk by 4MB 00:09:33.022 EAL: Trying to obtain current memory policy. 
00:09:33.022 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:33.022 EAL: Restoring previous memory policy: 4 00:09:33.022 EAL: Calling mem event callback 'spdk:(nil)' 00:09:33.022 EAL: request: mp_malloc_sync 00:09:33.022 EAL: No shared files mode enabled, IPC is disabled 00:09:33.022 EAL: Heap on socket 0 was expanded by 6MB 00:09:33.022 EAL: Calling mem event callback 'spdk:(nil)' 00:09:33.022 EAL: request: mp_malloc_sync 00:09:33.022 EAL: No shared files mode enabled, IPC is disabled 00:09:33.022 EAL: Heap on socket 0 was shrunk by 6MB 00:09:33.022 EAL: Trying to obtain current memory policy. 00:09:33.022 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:33.022 EAL: Restoring previous memory policy: 4 00:09:33.022 EAL: Calling mem event callback 'spdk:(nil)' 00:09:33.022 EAL: request: mp_malloc_sync 00:09:33.022 EAL: No shared files mode enabled, IPC is disabled 00:09:33.022 EAL: Heap on socket 0 was expanded by 10MB 00:09:33.022 EAL: Calling mem event callback 'spdk:(nil)' 00:09:33.022 EAL: request: mp_malloc_sync 00:09:33.022 EAL: No shared files mode enabled, IPC is disabled 00:09:33.022 EAL: Heap on socket 0 was shrunk by 10MB 00:09:33.281 EAL: Trying to obtain current memory policy. 00:09:33.281 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:33.281 EAL: Restoring previous memory policy: 4 00:09:33.281 EAL: Calling mem event callback 'spdk:(nil)' 00:09:33.281 EAL: request: mp_malloc_sync 00:09:33.281 EAL: No shared files mode enabled, IPC is disabled 00:09:33.281 EAL: Heap on socket 0 was expanded by 18MB 00:09:33.281 EAL: Calling mem event callback 'spdk:(nil)' 00:09:33.281 EAL: request: mp_malloc_sync 00:09:33.281 EAL: No shared files mode enabled, IPC is disabled 00:09:33.281 EAL: Heap on socket 0 was shrunk by 18MB 00:09:33.281 EAL: Trying to obtain current memory policy. 
00:09:33.281 EAL: Setting policy MPOL_PREFERRED for socket 0
00:09:33.281 EAL: Restoring previous memory policy: 4
00:09:33.281 EAL: Calling mem event callback 'spdk:(nil)'
00:09:33.281 EAL: request: mp_malloc_sync
00:09:33.281 EAL: No shared files mode enabled, IPC is disabled
00:09:33.281 EAL: Heap on socket 0 was expanded by 34MB
00:09:33.281 EAL: Calling mem event callback 'spdk:(nil)'
00:09:33.281 EAL: request: mp_malloc_sync
00:09:33.281 EAL: No shared files mode enabled, IPC is disabled
00:09:33.281 EAL: Heap on socket 0 was shrunk by 34MB
00:09:33.281 EAL: Trying to obtain current memory policy.
00:09:33.281 EAL: Setting policy MPOL_PREFERRED for socket 0
00:09:33.281 EAL: Restoring previous memory policy: 4
00:09:33.281 EAL: Calling mem event callback 'spdk:(nil)'
00:09:33.281 EAL: request: mp_malloc_sync
00:09:33.281 EAL: No shared files mode enabled, IPC is disabled
00:09:33.281 EAL: Heap on socket 0 was expanded by 66MB
00:09:33.591 EAL: Calling mem event callback 'spdk:(nil)'
00:09:33.591 EAL: request: mp_malloc_sync
00:09:33.591 EAL: No shared files mode enabled, IPC is disabled
00:09:33.591 EAL: Heap on socket 0 was shrunk by 66MB
00:09:33.591 EAL: Trying to obtain current memory policy.
00:09:33.591 EAL: Setting policy MPOL_PREFERRED for socket 0
00:09:33.591 EAL: Restoring previous memory policy: 4
00:09:33.591 EAL: Calling mem event callback 'spdk:(nil)'
00:09:33.591 EAL: request: mp_malloc_sync
00:09:33.591 EAL: No shared files mode enabled, IPC is disabled
00:09:33.591 EAL: Heap on socket 0 was expanded by 130MB
00:09:33.851 EAL: Calling mem event callback 'spdk:(nil)'
00:09:33.851 EAL: request: mp_malloc_sync
00:09:33.851 EAL: No shared files mode enabled, IPC is disabled
00:09:33.851 EAL: Heap on socket 0 was shrunk by 130MB
00:09:34.109 EAL: Trying to obtain current memory policy.
00:09:34.109 EAL: Setting policy MPOL_PREFERRED for socket 0
00:09:34.109 EAL: Restoring previous memory policy: 4
00:09:34.109 EAL: Calling mem event callback 'spdk:(nil)'
00:09:34.109 EAL: request: mp_malloc_sync
00:09:34.109 EAL: No shared files mode enabled, IPC is disabled
00:09:34.109 EAL: Heap on socket 0 was expanded by 258MB
00:09:34.676 EAL: Calling mem event callback 'spdk:(nil)'
00:09:34.676 EAL: request: mp_malloc_sync
00:09:34.676 EAL: No shared files mode enabled, IPC is disabled
00:09:34.676 EAL: Heap on socket 0 was shrunk by 258MB
00:09:34.934 EAL: Trying to obtain current memory policy.
00:09:34.934 EAL: Setting policy MPOL_PREFERRED for socket 0
00:09:35.192 EAL: Restoring previous memory policy: 4
00:09:35.192 EAL: Calling mem event callback 'spdk:(nil)'
00:09:35.192 EAL: request: mp_malloc_sync
00:09:35.192 EAL: No shared files mode enabled, IPC is disabled
00:09:35.192 EAL: Heap on socket 0 was expanded by 514MB
00:09:36.128 EAL: Calling mem event callback 'spdk:(nil)'
00:09:36.128 EAL: request: mp_malloc_sync
00:09:36.128 EAL: No shared files mode enabled, IPC is disabled
00:09:36.128 EAL: Heap on socket 0 was shrunk by 514MB
00:09:37.067 EAL: Trying to obtain current memory policy.
00:09:37.067 EAL: Setting policy MPOL_PREFERRED for socket 0
00:09:37.067 EAL: Restoring previous memory policy: 4
00:09:37.067 EAL: Calling mem event callback 'spdk:(nil)'
00:09:37.067 EAL: request: mp_malloc_sync
00:09:37.067 EAL: No shared files mode enabled, IPC is disabled
00:09:37.067 EAL: Heap on socket 0 was expanded by 1026MB
00:09:38.998 EAL: Calling mem event callback 'spdk:(nil)'
00:09:39.257 EAL: request: mp_malloc_sync
00:09:39.257 EAL: No shared files mode enabled, IPC is disabled
00:09:39.257 EAL: Heap on socket 0 was shrunk by 1026MB
00:09:41.161 passed
00:09:41.161
00:09:41.161 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:09:41.161               suites      1      1    n/a      0        0
00:09:41.161                tests      2      2      2      0        0
00:09:41.161              asserts   5621   5621   5621      0      n/a
00:09:41.161
00:09:41.161 Elapsed time =    8.138 seconds
00:09:41.161 EAL: Calling mem event callback 'spdk:(nil)'
00:09:41.161 EAL: request: mp_malloc_sync
00:09:41.161 EAL: No shared files mode enabled, IPC is disabled
00:09:41.161 EAL: Heap on socket 0 was shrunk by 2MB
00:09:41.161 EAL: No shared files mode enabled, IPC is disabled
00:09:41.161 EAL: No shared files mode enabled, IPC is disabled
00:09:41.161 EAL: No shared files mode enabled, IPC is disabled
00:09:41.161
00:09:41.161 real	0m8.486s
00:09:41.161 user	0m7.480s
00:09:41.161 sys	0m0.843s
00:09:41.161 13:29:40 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:41.161 ************************************
00:09:41.161 END TEST env_vtophys
00:09:41.161 ************************************
00:09:41.161 13:29:40 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:09:41.161 13:29:40 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:09:41.161 13:29:40 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:41.161 13:29:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:41.161 13:29:40 env -- common/autotest_common.sh@10 -- # set +x
00:09:41.161
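An editorial aside on the env_vtophys numbers above: the heap is expanded and then shrunk in steps of 34, 66, 130, 258, 514 and 1026 MB, i.e. 2^k + 2 MB for k = 5..10 (each test allocation doubles; the reading of the extra 2 MB as allocator overhead is ours, not the log's). A minimal shell sketch, not part of the SPDK test itself, that reproduces the sequence:

```shell
# Editorial sketch only: reproduce the expand sizes reported in the
# env_vtophys log above (34, 66, 130, 258, 514, 1026 MB = 2^k + 2 MB).
for k in 5 6 7 8 9 10; do
  printf '%d MB\n' $(( (1 << k) + 2 ))
done
```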
************************************
00:09:41.161 START TEST env_pci
00:09:41.161 ************************************
00:09:41.161 13:29:40 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:09:41.161
00:09:41.161
00:09:41.161 CUnit - A unit testing framework for C - Version 2.1-3
00:09:41.161 http://cunit.sourceforge.net/
00:09:41.161
00:09:41.161
00:09:41.161 Suite: pci
00:09:41.161 Test: pci_hook ...[2024-11-20 13:29:40.371264] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56543 has claimed it
00:09:41.161 EAL: Cannot find device (10000:00:01.0)
00:09:41.161 passed
00:09:41.161
00:09:41.161 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:09:41.161               suites      1      1    n/a      0        0
00:09:41.161                tests      1      1      1      0        0
00:09:41.161              asserts     25     25     25      0      n/a
00:09:41.161
00:09:41.161 Elapsed time =    0.009 seconds
00:09:41.161 EAL: Failed to attach device on primary process
00:09:41.161
00:09:41.161 real	0m0.117s
00:09:41.161 user	0m0.051s
00:09:41.161 sys	0m0.065s
00:09:41.161 13:29:40 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:41.161 ************************************
00:09:41.161 END TEST env_pci
00:09:41.161 ************************************
00:09:41.161 13:29:40 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:09:41.161 13:29:40 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:09:41.161 13:29:40 env -- env/env.sh@15 -- # uname
00:09:41.161 13:29:40 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:09:41.161 13:29:40 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:09:41.161 13:29:40 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:09:41.161 13:29:40 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:09:41.161 13:29:40 env
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.161 13:29:40 env -- common/autotest_common.sh@10 -- # set +x 00:09:41.161 ************************************ 00:09:41.161 START TEST env_dpdk_post_init 00:09:41.161 ************************************ 00:09:41.161 13:29:40 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:41.161 EAL: Detected CPU lcores: 10 00:09:41.161 EAL: Detected NUMA nodes: 1 00:09:41.161 EAL: Detected shared linkage of DPDK 00:09:41.161 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:41.161 EAL: Selected IOVA mode 'PA' 00:09:41.420 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:41.420 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:09:41.420 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:09:41.420 Starting DPDK initialization... 00:09:41.420 Starting SPDK post initialization... 00:09:41.420 SPDK NVMe probe 00:09:41.420 Attaching to 0000:00:10.0 00:09:41.420 Attaching to 0000:00:11.0 00:09:41.420 Attached to 0000:00:10.0 00:09:41.420 Attached to 0000:00:11.0 00:09:41.420 Cleaning up... 
00:09:41.420 00:09:41.420 real 0m0.291s 00:09:41.420 user 0m0.097s 00:09:41.420 sys 0m0.095s 00:09:41.420 13:29:40 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.420 ************************************ 00:09:41.420 END TEST env_dpdk_post_init 00:09:41.420 ************************************ 00:09:41.420 13:29:40 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:09:41.420 13:29:40 env -- env/env.sh@26 -- # uname 00:09:41.420 13:29:40 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:09:41.420 13:29:40 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:41.420 13:29:40 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:41.420 13:29:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.420 13:29:40 env -- common/autotest_common.sh@10 -- # set +x 00:09:41.420 ************************************ 00:09:41.420 START TEST env_mem_callbacks 00:09:41.420 ************************************ 00:09:41.420 13:29:40 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:41.679 EAL: Detected CPU lcores: 10 00:09:41.679 EAL: Detected NUMA nodes: 1 00:09:41.679 EAL: Detected shared linkage of DPDK 00:09:41.679 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:41.679 EAL: Selected IOVA mode 'PA' 00:09:41.679 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:41.679 00:09:41.679 00:09:41.679 CUnit - A unit testing framework for C - Version 2.1-3 00:09:41.679 http://cunit.sourceforge.net/ 00:09:41.679 00:09:41.679 00:09:41.679 Suite: memory 00:09:41.679 Test: test ... 
00:09:41.679 register 0x200000200000 2097152 00:09:41.679 malloc 3145728 00:09:41.679 register 0x200000400000 4194304 00:09:41.679 buf 0x2000004fffc0 len 3145728 PASSED 00:09:41.679 malloc 64 00:09:41.679 buf 0x2000004ffec0 len 64 PASSED 00:09:41.679 malloc 4194304 00:09:41.679 register 0x200000800000 6291456 00:09:41.679 buf 0x2000009fffc0 len 4194304 PASSED 00:09:41.679 free 0x2000004fffc0 3145728 00:09:41.679 free 0x2000004ffec0 64 00:09:41.679 unregister 0x200000400000 4194304 PASSED 00:09:41.679 free 0x2000009fffc0 4194304 00:09:41.679 unregister 0x200000800000 6291456 PASSED 00:09:41.679 malloc 8388608 00:09:41.679 register 0x200000400000 10485760 00:09:41.679 buf 0x2000005fffc0 len 8388608 PASSED 00:09:41.679 free 0x2000005fffc0 8388608 00:09:41.679 unregister 0x200000400000 10485760 PASSED 00:09:41.939 passed 00:09:41.939 00:09:41.939 Run Summary: Type Total Ran Passed Failed Inactive 00:09:41.939 suites 1 1 n/a 0 0 00:09:41.939 tests 1 1 1 0 0 00:09:41.939 asserts 15 15 15 0 n/a 00:09:41.939 00:09:41.939 Elapsed time = 0.083 seconds 00:09:41.939 00:09:41.939 real 0m0.292s 00:09:41.939 user 0m0.110s 00:09:41.939 sys 0m0.078s 00:09:41.939 ************************************ 00:09:41.939 END TEST env_mem_callbacks 00:09:41.939 ************************************ 00:09:41.939 13:29:41 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.939 13:29:41 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:09:41.939 ************************************ 00:09:41.939 END TEST env 00:09:41.939 ************************************ 00:09:41.939 00:09:41.939 real 0m10.115s 00:09:41.939 user 0m8.247s 00:09:41.939 sys 0m1.495s 00:09:41.939 13:29:41 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.939 13:29:41 env -- common/autotest_common.sh@10 -- # set +x 00:09:41.939 13:29:41 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:41.939 13:29:41 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:41.939 13:29:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.939 13:29:41 -- common/autotest_common.sh@10 -- # set +x 00:09:41.939 ************************************ 00:09:41.939 START TEST rpc 00:09:41.939 ************************************ 00:09:41.939 13:29:41 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:42.198 * Looking for test storage... 00:09:42.199 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:42.199 13:29:41 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:42.199 13:29:41 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:42.199 13:29:41 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:42.199 13:29:41 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:42.199 13:29:41 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:42.199 13:29:41 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:42.199 13:29:41 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:42.199 13:29:41 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:42.199 13:29:41 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:42.199 13:29:41 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:42.199 13:29:41 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:42.199 13:29:41 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:42.199 13:29:41 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:42.199 13:29:41 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:42.199 13:29:41 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:42.199 13:29:41 rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:42.199 13:29:41 rpc -- scripts/common.sh@345 -- # : 1 00:09:42.199 13:29:41 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:42.199 13:29:41 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:42.199 13:29:41 rpc -- scripts/common.sh@365 -- # decimal 1 00:09:42.199 13:29:41 rpc -- scripts/common.sh@353 -- # local d=1 00:09:42.199 13:29:41 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:42.199 13:29:41 rpc -- scripts/common.sh@355 -- # echo 1 00:09:42.199 13:29:41 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:42.199 13:29:41 rpc -- scripts/common.sh@366 -- # decimal 2 00:09:42.199 13:29:41 rpc -- scripts/common.sh@353 -- # local d=2 00:09:42.199 13:29:41 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:42.199 13:29:41 rpc -- scripts/common.sh@355 -- # echo 2 00:09:42.199 13:29:41 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:42.199 13:29:41 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:42.199 13:29:41 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:42.199 13:29:41 rpc -- scripts/common.sh@368 -- # return 0 00:09:42.199 13:29:41 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:42.199 13:29:41 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:42.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.199 --rc genhtml_branch_coverage=1 00:09:42.199 --rc genhtml_function_coverage=1 00:09:42.199 --rc genhtml_legend=1 00:09:42.199 --rc geninfo_all_blocks=1 00:09:42.199 --rc geninfo_unexecuted_blocks=1 00:09:42.199 00:09:42.199 ' 00:09:42.199 13:29:41 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:42.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.199 --rc genhtml_branch_coverage=1 00:09:42.199 --rc genhtml_function_coverage=1 00:09:42.199 --rc genhtml_legend=1 00:09:42.199 --rc geninfo_all_blocks=1 00:09:42.199 --rc geninfo_unexecuted_blocks=1 00:09:42.199 00:09:42.199 ' 00:09:42.199 13:29:41 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:42.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:09:42.199 --rc genhtml_branch_coverage=1 00:09:42.199 --rc genhtml_function_coverage=1 00:09:42.199 --rc genhtml_legend=1 00:09:42.199 --rc geninfo_all_blocks=1 00:09:42.199 --rc geninfo_unexecuted_blocks=1 00:09:42.199 00:09:42.199 ' 00:09:42.199 13:29:41 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:42.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.199 --rc genhtml_branch_coverage=1 00:09:42.199 --rc genhtml_function_coverage=1 00:09:42.199 --rc genhtml_legend=1 00:09:42.199 --rc geninfo_all_blocks=1 00:09:42.199 --rc geninfo_unexecuted_blocks=1 00:09:42.199 00:09:42.199 ' 00:09:42.199 13:29:41 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56670 00:09:42.199 13:29:41 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:09:42.199 13:29:41 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:42.199 13:29:41 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56670 00:09:42.199 13:29:41 rpc -- common/autotest_common.sh@835 -- # '[' -z 56670 ']' 00:09:42.199 13:29:41 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.199 13:29:41 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:42.199 13:29:41 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.199 13:29:41 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:42.199 13:29:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.199 [2024-11-20 13:29:41.662379] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
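The xtrace above walks the cmp_versions helper in scripts/common.sh deciding whether the installed lcov (1.15) is older than 2: the dot-separated fields are compared left to right, with missing fields treated as 0. The same idea as a stand-alone sketch (`ver_lt` is our hypothetical name, not the SPDK function):

```shell
# Hedged sketch of dotted-version comparison in the spirit of the
# cmp_versions trace above; "ver_lt" is a hypothetical name.
# Returns 0 (true) when $1 is strictly older than $2.
ver_lt() {
    local IFS=.
    # Word-split each version string on '.' into arrays of numeric fields.
    local -a a=($1) b=($2)
    local i x y
    for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
        x=${a[i]:-0}; y=${b[i]:-0}   # missing fields compare as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"
```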
00:09:42.199 [2024-11-20 13:29:41.662726] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56670 ] 00:09:42.461 [2024-11-20 13:29:41.842271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.720 [2024-11-20 13:29:41.960216] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:09:42.720 [2024-11-20 13:29:41.960449] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56670' to capture a snapshot of events at runtime. 00:09:42.720 [2024-11-20 13:29:41.960554] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:42.720 [2024-11-20 13:29:41.960677] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:42.720 [2024-11-20 13:29:41.960693] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56670 for offline analysis/debug. 
00:09:42.720 [2024-11-20 13:29:41.961974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.656 13:29:42 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.656 13:29:42 rpc -- common/autotest_common.sh@868 -- # return 0 00:09:43.656 13:29:42 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:43.656 13:29:42 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:43.656 13:29:42 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:09:43.656 13:29:42 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:09:43.656 13:29:42 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:43.656 13:29:42 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.656 13:29:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.656 ************************************ 00:09:43.656 START TEST rpc_integrity 00:09:43.656 ************************************ 00:09:43.656 13:29:42 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:09:43.656 13:29:42 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:43.656 13:29:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.657 13:29:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:43.657 13:29:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.657 13:29:42 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:43.657 13:29:42 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:43.657 13:29:42 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:43.657 13:29:42 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:43.657 13:29:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.657 13:29:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:43.657 13:29:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.657 13:29:42 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:09:43.657 13:29:42 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:43.657 13:29:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.657 13:29:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:43.657 13:29:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.657 13:29:42 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:43.657 { 00:09:43.657 "name": "Malloc0", 00:09:43.657 "aliases": [ 00:09:43.657 "ab734039-1e80-4c2c-9c1f-ae391e37b9e6" 00:09:43.657 ], 00:09:43.657 "product_name": "Malloc disk", 00:09:43.657 "block_size": 512, 00:09:43.657 "num_blocks": 16384, 00:09:43.657 "uuid": "ab734039-1e80-4c2c-9c1f-ae391e37b9e6", 00:09:43.657 "assigned_rate_limits": { 00:09:43.657 "rw_ios_per_sec": 0, 00:09:43.657 "rw_mbytes_per_sec": 0, 00:09:43.657 "r_mbytes_per_sec": 0, 00:09:43.657 "w_mbytes_per_sec": 0 00:09:43.657 }, 00:09:43.657 "claimed": false, 00:09:43.657 "zoned": false, 00:09:43.657 "supported_io_types": { 00:09:43.657 "read": true, 00:09:43.657 "write": true, 00:09:43.657 "unmap": true, 00:09:43.657 "flush": true, 00:09:43.657 "reset": true, 00:09:43.657 "nvme_admin": false, 00:09:43.657 "nvme_io": false, 00:09:43.657 "nvme_io_md": false, 00:09:43.657 "write_zeroes": true, 00:09:43.657 "zcopy": true, 00:09:43.657 "get_zone_info": false, 00:09:43.657 "zone_management": false, 00:09:43.657 "zone_append": false, 00:09:43.657 "compare": false, 00:09:43.657 "compare_and_write": false, 00:09:43.657 "abort": true, 00:09:43.657 "seek_hole": false, 
00:09:43.657 "seek_data": false, 00:09:43.657 "copy": true, 00:09:43.657 "nvme_iov_md": false 00:09:43.657 }, 00:09:43.657 "memory_domains": [ 00:09:43.657 { 00:09:43.657 "dma_device_id": "system", 00:09:43.657 "dma_device_type": 1 00:09:43.657 }, 00:09:43.657 { 00:09:43.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.657 "dma_device_type": 2 00:09:43.657 } 00:09:43.657 ], 00:09:43.657 "driver_specific": {} 00:09:43.657 } 00:09:43.657 ]' 00:09:43.657 13:29:42 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:43.657 13:29:43 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:43.657 13:29:43 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:09:43.657 13:29:43 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.657 13:29:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:43.657 [2024-11-20 13:29:43.033460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:09:43.657 [2024-11-20 13:29:43.033537] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.657 [2024-11-20 13:29:43.033564] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:09:43.657 [2024-11-20 13:29:43.033591] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.657 [2024-11-20 13:29:43.036316] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.657 [2024-11-20 13:29:43.036486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:43.657 Passthru0 00:09:43.657 13:29:43 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.657 13:29:43 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:43.657 13:29:43 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.657 13:29:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:09:43.657 13:29:43 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.657 13:29:43 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:43.657 { 00:09:43.657 "name": "Malloc0", 00:09:43.657 "aliases": [ 00:09:43.657 "ab734039-1e80-4c2c-9c1f-ae391e37b9e6" 00:09:43.657 ], 00:09:43.657 "product_name": "Malloc disk", 00:09:43.657 "block_size": 512, 00:09:43.657 "num_blocks": 16384, 00:09:43.657 "uuid": "ab734039-1e80-4c2c-9c1f-ae391e37b9e6", 00:09:43.657 "assigned_rate_limits": { 00:09:43.657 "rw_ios_per_sec": 0, 00:09:43.657 "rw_mbytes_per_sec": 0, 00:09:43.657 "r_mbytes_per_sec": 0, 00:09:43.657 "w_mbytes_per_sec": 0 00:09:43.657 }, 00:09:43.657 "claimed": true, 00:09:43.657 "claim_type": "exclusive_write", 00:09:43.657 "zoned": false, 00:09:43.657 "supported_io_types": { 00:09:43.657 "read": true, 00:09:43.657 "write": true, 00:09:43.657 "unmap": true, 00:09:43.657 "flush": true, 00:09:43.657 "reset": true, 00:09:43.657 "nvme_admin": false, 00:09:43.657 "nvme_io": false, 00:09:43.657 "nvme_io_md": false, 00:09:43.657 "write_zeroes": true, 00:09:43.657 "zcopy": true, 00:09:43.657 "get_zone_info": false, 00:09:43.657 "zone_management": false, 00:09:43.657 "zone_append": false, 00:09:43.657 "compare": false, 00:09:43.657 "compare_and_write": false, 00:09:43.657 "abort": true, 00:09:43.657 "seek_hole": false, 00:09:43.657 "seek_data": false, 00:09:43.657 "copy": true, 00:09:43.657 "nvme_iov_md": false 00:09:43.657 }, 00:09:43.657 "memory_domains": [ 00:09:43.657 { 00:09:43.657 "dma_device_id": "system", 00:09:43.657 "dma_device_type": 1 00:09:43.657 }, 00:09:43.657 { 00:09:43.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.657 "dma_device_type": 2 00:09:43.657 } 00:09:43.657 ], 00:09:43.657 "driver_specific": {} 00:09:43.657 }, 00:09:43.657 { 00:09:43.657 "name": "Passthru0", 00:09:43.657 "aliases": [ 00:09:43.657 "d3ffc6e6-3958-5bb1-8e6b-6fdfc40f4d44" 00:09:43.657 ], 00:09:43.657 "product_name": "passthru", 00:09:43.657 
"block_size": 512, 00:09:43.657 "num_blocks": 16384, 00:09:43.657 "uuid": "d3ffc6e6-3958-5bb1-8e6b-6fdfc40f4d44", 00:09:43.657 "assigned_rate_limits": { 00:09:43.657 "rw_ios_per_sec": 0, 00:09:43.657 "rw_mbytes_per_sec": 0, 00:09:43.657 "r_mbytes_per_sec": 0, 00:09:43.657 "w_mbytes_per_sec": 0 00:09:43.657 }, 00:09:43.657 "claimed": false, 00:09:43.657 "zoned": false, 00:09:43.657 "supported_io_types": { 00:09:43.657 "read": true, 00:09:43.657 "write": true, 00:09:43.657 "unmap": true, 00:09:43.657 "flush": true, 00:09:43.657 "reset": true, 00:09:43.657 "nvme_admin": false, 00:09:43.657 "nvme_io": false, 00:09:43.657 "nvme_io_md": false, 00:09:43.657 "write_zeroes": true, 00:09:43.657 "zcopy": true, 00:09:43.657 "get_zone_info": false, 00:09:43.657 "zone_management": false, 00:09:43.657 "zone_append": false, 00:09:43.657 "compare": false, 00:09:43.657 "compare_and_write": false, 00:09:43.657 "abort": true, 00:09:43.657 "seek_hole": false, 00:09:43.657 "seek_data": false, 00:09:43.657 "copy": true, 00:09:43.657 "nvme_iov_md": false 00:09:43.657 }, 00:09:43.657 "memory_domains": [ 00:09:43.657 { 00:09:43.657 "dma_device_id": "system", 00:09:43.657 "dma_device_type": 1 00:09:43.657 }, 00:09:43.657 { 00:09:43.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.657 "dma_device_type": 2 00:09:43.657 } 00:09:43.657 ], 00:09:43.657 "driver_specific": { 00:09:43.657 "passthru": { 00:09:43.657 "name": "Passthru0", 00:09:43.657 "base_bdev_name": "Malloc0" 00:09:43.657 } 00:09:43.657 } 00:09:43.657 } 00:09:43.657 ]' 00:09:43.657 13:29:43 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:43.657 13:29:43 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:43.657 13:29:43 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:43.657 13:29:43 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.657 13:29:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:43.657 13:29:43 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.657 13:29:43 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:09:43.657 13:29:43 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.657 13:29:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:43.917 13:29:43 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.917 13:29:43 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:43.917 13:29:43 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.917 13:29:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:43.917 13:29:43 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.917 13:29:43 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:43.917 13:29:43 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:43.917 ************************************ 00:09:43.917 END TEST rpc_integrity 00:09:43.917 ************************************ 00:09:43.917 13:29:43 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:43.917 00:09:43.917 real 0m0.342s 00:09:43.917 user 0m0.174s 00:09:43.917 sys 0m0.066s 00:09:43.917 13:29:43 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.917 13:29:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:43.917 13:29:43 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:09:43.917 13:29:43 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:43.917 13:29:43 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.917 13:29:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.917 ************************************ 00:09:43.917 START TEST rpc_plugins 00:09:43.917 ************************************ 00:09:43.917 13:29:43 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:09:43.917 13:29:43 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:09:43.917 13:29:43 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.917 13:29:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:43.917 13:29:43 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.917 13:29:43 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:09:43.917 13:29:43 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:09:43.917 13:29:43 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.917 13:29:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:43.917 13:29:43 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.917 13:29:43 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:09:43.917 { 00:09:43.917 "name": "Malloc1", 00:09:43.917 "aliases": [ 00:09:43.917 "542b7d2b-568d-489c-8035-f5e3b9c2d721" 00:09:43.917 ], 00:09:43.917 "product_name": "Malloc disk", 00:09:43.917 "block_size": 4096, 00:09:43.917 "num_blocks": 256, 00:09:43.917 "uuid": "542b7d2b-568d-489c-8035-f5e3b9c2d721", 00:09:43.917 "assigned_rate_limits": { 00:09:43.917 "rw_ios_per_sec": 0, 00:09:43.917 "rw_mbytes_per_sec": 0, 00:09:43.917 "r_mbytes_per_sec": 0, 00:09:43.917 "w_mbytes_per_sec": 0 00:09:43.917 }, 00:09:43.917 "claimed": false, 00:09:43.917 "zoned": false, 00:09:43.917 "supported_io_types": { 00:09:43.917 "read": true, 00:09:43.917 "write": true, 00:09:43.917 "unmap": true, 00:09:43.917 "flush": true, 00:09:43.917 "reset": true, 00:09:43.917 "nvme_admin": false, 00:09:43.917 "nvme_io": false, 00:09:43.917 "nvme_io_md": false, 00:09:43.917 "write_zeroes": true, 00:09:43.917 "zcopy": true, 00:09:43.917 "get_zone_info": false, 00:09:43.917 "zone_management": false, 00:09:43.917 "zone_append": false, 00:09:43.917 "compare": false, 00:09:43.917 "compare_and_write": false, 00:09:43.917 "abort": true, 00:09:43.917 "seek_hole": false, 00:09:43.917 "seek_data": false, 00:09:43.917 "copy": 
true, 00:09:43.917 "nvme_iov_md": false 00:09:43.917 }, 00:09:43.917 "memory_domains": [ 00:09:43.917 { 00:09:43.917 "dma_device_id": "system", 00:09:43.917 "dma_device_type": 1 00:09:43.917 }, 00:09:43.917 { 00:09:43.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.917 "dma_device_type": 2 00:09:43.917 } 00:09:43.917 ], 00:09:43.917 "driver_specific": {} 00:09:43.917 } 00:09:43.917 ]' 00:09:43.917 13:29:43 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:09:43.917 13:29:43 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:09:43.917 13:29:43 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:09:43.917 13:29:43 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.917 13:29:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:43.917 13:29:43 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.917 13:29:43 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:09:43.917 13:29:43 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.917 13:29:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:43.917 13:29:43 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.176 13:29:43 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:09:44.176 13:29:43 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:09:44.176 ************************************ 00:09:44.176 END TEST rpc_plugins 00:09:44.176 ************************************ 00:09:44.176 13:29:43 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:09:44.176 00:09:44.176 real 0m0.159s 00:09:44.176 user 0m0.091s 00:09:44.176 sys 0m0.031s 00:09:44.176 13:29:43 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.176 13:29:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:44.176 13:29:43 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:09:44.176 13:29:43 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:44.176 13:29:43 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.176 13:29:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.176 ************************************ 00:09:44.177 START TEST rpc_trace_cmd_test 00:09:44.177 ************************************ 00:09:44.177 13:29:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:09:44.177 13:29:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:09:44.177 13:29:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:09:44.177 13:29:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.177 13:29:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.177 13:29:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.177 13:29:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:09:44.177 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56670", 00:09:44.177 "tpoint_group_mask": "0x8", 00:09:44.177 "iscsi_conn": { 00:09:44.177 "mask": "0x2", 00:09:44.177 "tpoint_mask": "0x0" 00:09:44.177 }, 00:09:44.177 "scsi": { 00:09:44.177 "mask": "0x4", 00:09:44.177 "tpoint_mask": "0x0" 00:09:44.177 }, 00:09:44.177 "bdev": { 00:09:44.177 "mask": "0x8", 00:09:44.177 "tpoint_mask": "0xffffffffffffffff" 00:09:44.177 }, 00:09:44.177 "nvmf_rdma": { 00:09:44.177 "mask": "0x10", 00:09:44.177 "tpoint_mask": "0x0" 00:09:44.177 }, 00:09:44.177 "nvmf_tcp": { 00:09:44.177 "mask": "0x20", 00:09:44.177 "tpoint_mask": "0x0" 00:09:44.177 }, 00:09:44.177 "ftl": { 00:09:44.177 "mask": "0x40", 00:09:44.177 "tpoint_mask": "0x0" 00:09:44.177 }, 00:09:44.177 "blobfs": { 00:09:44.177 "mask": "0x80", 00:09:44.177 "tpoint_mask": "0x0" 00:09:44.177 }, 00:09:44.177 "dsa": { 00:09:44.177 "mask": "0x200", 00:09:44.177 "tpoint_mask": "0x0" 00:09:44.177 }, 00:09:44.177 "thread": { 00:09:44.177 "mask": "0x400", 00:09:44.177 
"tpoint_mask": "0x0" 00:09:44.177 }, 00:09:44.177 "nvme_pcie": { 00:09:44.177 "mask": "0x800", 00:09:44.177 "tpoint_mask": "0x0" 00:09:44.177 }, 00:09:44.177 "iaa": { 00:09:44.177 "mask": "0x1000", 00:09:44.177 "tpoint_mask": "0x0" 00:09:44.177 }, 00:09:44.177 "nvme_tcp": { 00:09:44.177 "mask": "0x2000", 00:09:44.177 "tpoint_mask": "0x0" 00:09:44.177 }, 00:09:44.177 "bdev_nvme": { 00:09:44.177 "mask": "0x4000", 00:09:44.177 "tpoint_mask": "0x0" 00:09:44.177 }, 00:09:44.177 "sock": { 00:09:44.177 "mask": "0x8000", 00:09:44.177 "tpoint_mask": "0x0" 00:09:44.177 }, 00:09:44.177 "blob": { 00:09:44.177 "mask": "0x10000", 00:09:44.177 "tpoint_mask": "0x0" 00:09:44.177 }, 00:09:44.177 "bdev_raid": { 00:09:44.177 "mask": "0x20000", 00:09:44.177 "tpoint_mask": "0x0" 00:09:44.177 }, 00:09:44.177 "scheduler": { 00:09:44.177 "mask": "0x40000", 00:09:44.177 "tpoint_mask": "0x0" 00:09:44.177 } 00:09:44.177 }' 00:09:44.177 13:29:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:09:44.177 13:29:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:09:44.177 13:29:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:09:44.177 13:29:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:09:44.177 13:29:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:09:44.436 13:29:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:09:44.436 13:29:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:09:44.436 13:29:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:09:44.436 13:29:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:09:44.436 13:29:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:09:44.436 ************************************ 00:09:44.436 END TEST rpc_trace_cmd_test 00:09:44.436 ************************************ 00:09:44.436 00:09:44.436 real 0m0.252s 00:09:44.436 user 
0m0.200s 00:09:44.436 sys 0m0.041s 00:09:44.436 13:29:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.436 13:29:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.436 13:29:43 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:09:44.436 13:29:43 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:09:44.436 13:29:43 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:09:44.436 13:29:43 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:44.436 13:29:43 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.436 13:29:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.436 ************************************ 00:09:44.436 START TEST rpc_daemon_integrity 00:09:44.436 ************************************ 00:09:44.436 13:29:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:09:44.436 13:29:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:44.436 13:29:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.436 13:29:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:44.436 13:29:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.436 13:29:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:44.436 13:29:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:44.436 13:29:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:44.436 13:29:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:44.436 13:29:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.436 13:29:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:44.436 13:29:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.436 13:29:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:09:44.695 13:29:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:44.695 13:29:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.695 13:29:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:44.695 13:29:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.695 13:29:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:44.695 { 00:09:44.695 "name": "Malloc2", 00:09:44.695 "aliases": [ 00:09:44.695 "3cba1386-b886-45fb-a1e1-35124f64e511" 00:09:44.695 ], 00:09:44.695 "product_name": "Malloc disk", 00:09:44.695 "block_size": 512, 00:09:44.695 "num_blocks": 16384, 00:09:44.695 "uuid": "3cba1386-b886-45fb-a1e1-35124f64e511", 00:09:44.695 "assigned_rate_limits": { 00:09:44.695 "rw_ios_per_sec": 0, 00:09:44.695 "rw_mbytes_per_sec": 0, 00:09:44.695 "r_mbytes_per_sec": 0, 00:09:44.695 "w_mbytes_per_sec": 0 00:09:44.695 }, 00:09:44.695 "claimed": false, 00:09:44.695 "zoned": false, 00:09:44.695 "supported_io_types": { 00:09:44.695 "read": true, 00:09:44.695 "write": true, 00:09:44.695 "unmap": true, 00:09:44.695 "flush": true, 00:09:44.695 "reset": true, 00:09:44.695 "nvme_admin": false, 00:09:44.695 "nvme_io": false, 00:09:44.695 "nvme_io_md": false, 00:09:44.695 "write_zeroes": true, 00:09:44.695 "zcopy": true, 00:09:44.695 "get_zone_info": false, 00:09:44.695 "zone_management": false, 00:09:44.695 "zone_append": false, 00:09:44.695 "compare": false, 00:09:44.695 "compare_and_write": false, 00:09:44.695 "abort": true, 00:09:44.695 "seek_hole": false, 00:09:44.695 "seek_data": false, 00:09:44.695 "copy": true, 00:09:44.695 "nvme_iov_md": false 00:09:44.695 }, 00:09:44.695 "memory_domains": [ 00:09:44.695 { 00:09:44.695 "dma_device_id": "system", 00:09:44.695 "dma_device_type": 1 00:09:44.695 }, 00:09:44.695 { 00:09:44.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.695 "dma_device_type": 2 00:09:44.695 } 
00:09:44.695 ], 00:09:44.695 "driver_specific": {} 00:09:44.695 } 00:09:44.695 ]' 00:09:44.695 13:29:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:44.695 13:29:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:44.695 13:29:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:09:44.695 13:29:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.695 13:29:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:44.695 [2024-11-20 13:29:44.003627] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:09:44.695 [2024-11-20 13:29:44.003717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.695 [2024-11-20 13:29:44.003745] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:44.695 [2024-11-20 13:29:44.003760] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.695 [2024-11-20 13:29:44.006408] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.695 [2024-11-20 13:29:44.006603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:44.695 Passthru0 00:09:44.695 13:29:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.695 13:29:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:44.695 13:29:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.695 13:29:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:44.695 13:29:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.695 13:29:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:44.695 { 00:09:44.695 "name": "Malloc2", 00:09:44.695 "aliases": [ 00:09:44.695 "3cba1386-b886-45fb-a1e1-35124f64e511" 
00:09:44.695 ], 00:09:44.695 "product_name": "Malloc disk", 00:09:44.695 "block_size": 512, 00:09:44.695 "num_blocks": 16384, 00:09:44.695 "uuid": "3cba1386-b886-45fb-a1e1-35124f64e511", 00:09:44.695 "assigned_rate_limits": { 00:09:44.695 "rw_ios_per_sec": 0, 00:09:44.695 "rw_mbytes_per_sec": 0, 00:09:44.695 "r_mbytes_per_sec": 0, 00:09:44.695 "w_mbytes_per_sec": 0 00:09:44.695 }, 00:09:44.695 "claimed": true, 00:09:44.695 "claim_type": "exclusive_write", 00:09:44.695 "zoned": false, 00:09:44.695 "supported_io_types": { 00:09:44.695 "read": true, 00:09:44.695 "write": true, 00:09:44.695 "unmap": true, 00:09:44.695 "flush": true, 00:09:44.695 "reset": true, 00:09:44.695 "nvme_admin": false, 00:09:44.695 "nvme_io": false, 00:09:44.695 "nvme_io_md": false, 00:09:44.695 "write_zeroes": true, 00:09:44.695 "zcopy": true, 00:09:44.695 "get_zone_info": false, 00:09:44.695 "zone_management": false, 00:09:44.695 "zone_append": false, 00:09:44.695 "compare": false, 00:09:44.695 "compare_and_write": false, 00:09:44.695 "abort": true, 00:09:44.695 "seek_hole": false, 00:09:44.695 "seek_data": false, 00:09:44.695 "copy": true, 00:09:44.695 "nvme_iov_md": false 00:09:44.695 }, 00:09:44.695 "memory_domains": [ 00:09:44.695 { 00:09:44.695 "dma_device_id": "system", 00:09:44.695 "dma_device_type": 1 00:09:44.695 }, 00:09:44.695 { 00:09:44.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.695 "dma_device_type": 2 00:09:44.695 } 00:09:44.695 ], 00:09:44.695 "driver_specific": {} 00:09:44.695 }, 00:09:44.695 { 00:09:44.695 "name": "Passthru0", 00:09:44.695 "aliases": [ 00:09:44.695 "b45bc9e5-d8c5-57a3-85f2-f58f6e560a2b" 00:09:44.695 ], 00:09:44.695 "product_name": "passthru", 00:09:44.695 "block_size": 512, 00:09:44.695 "num_blocks": 16384, 00:09:44.695 "uuid": "b45bc9e5-d8c5-57a3-85f2-f58f6e560a2b", 00:09:44.695 "assigned_rate_limits": { 00:09:44.695 "rw_ios_per_sec": 0, 00:09:44.695 "rw_mbytes_per_sec": 0, 00:09:44.695 "r_mbytes_per_sec": 0, 00:09:44.695 "w_mbytes_per_sec": 0 
00:09:44.695 }, 00:09:44.695 "claimed": false, 00:09:44.695 "zoned": false, 00:09:44.695 "supported_io_types": { 00:09:44.695 "read": true, 00:09:44.695 "write": true, 00:09:44.695 "unmap": true, 00:09:44.695 "flush": true, 00:09:44.695 "reset": true, 00:09:44.695 "nvme_admin": false, 00:09:44.695 "nvme_io": false, 00:09:44.695 "nvme_io_md": false, 00:09:44.695 "write_zeroes": true, 00:09:44.695 "zcopy": true, 00:09:44.695 "get_zone_info": false, 00:09:44.695 "zone_management": false, 00:09:44.695 "zone_append": false, 00:09:44.695 "compare": false, 00:09:44.695 "compare_and_write": false, 00:09:44.695 "abort": true, 00:09:44.695 "seek_hole": false, 00:09:44.695 "seek_data": false, 00:09:44.695 "copy": true, 00:09:44.695 "nvme_iov_md": false 00:09:44.695 }, 00:09:44.695 "memory_domains": [ 00:09:44.695 { 00:09:44.695 "dma_device_id": "system", 00:09:44.695 "dma_device_type": 1 00:09:44.695 }, 00:09:44.695 { 00:09:44.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.695 "dma_device_type": 2 00:09:44.695 } 00:09:44.695 ], 00:09:44.695 "driver_specific": { 00:09:44.695 "passthru": { 00:09:44.695 "name": "Passthru0", 00:09:44.695 "base_bdev_name": "Malloc2" 00:09:44.695 } 00:09:44.695 } 00:09:44.695 } 00:09:44.695 ]' 00:09:44.695 13:29:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:44.695 13:29:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:44.695 13:29:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:44.695 13:29:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.695 13:29:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:44.695 13:29:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.695 13:29:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:09:44.695 13:29:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:44.695 13:29:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:44.695 13:29:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.695 13:29:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:44.695 13:29:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.695 13:29:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:44.695 13:29:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.695 13:29:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:44.695 13:29:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:44.954 ************************************ 00:09:44.954 END TEST rpc_daemon_integrity 00:09:44.954 ************************************ 00:09:44.954 13:29:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:44.954 00:09:44.954 real 0m0.380s 00:09:44.954 user 0m0.211s 00:09:44.954 sys 0m0.066s 00:09:44.954 13:29:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.954 13:29:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:44.954 13:29:44 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:09:44.954 13:29:44 rpc -- rpc/rpc.sh@84 -- # killprocess 56670 00:09:44.954 13:29:44 rpc -- common/autotest_common.sh@954 -- # '[' -z 56670 ']' 00:09:44.954 13:29:44 rpc -- common/autotest_common.sh@958 -- # kill -0 56670 00:09:44.954 13:29:44 rpc -- common/autotest_common.sh@959 -- # uname 00:09:44.954 13:29:44 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:44.954 13:29:44 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56670 00:09:44.954 killing process with pid 56670 00:09:44.954 13:29:44 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:44.954 13:29:44 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:09:44.955 13:29:44 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56670' 00:09:44.955 13:29:44 rpc -- common/autotest_common.sh@973 -- # kill 56670 00:09:44.955 13:29:44 rpc -- common/autotest_common.sh@978 -- # wait 56670 00:09:47.488 00:09:47.488 real 0m5.428s 00:09:47.488 user 0m5.950s 00:09:47.488 sys 0m0.987s 00:09:47.488 ************************************ 00:09:47.488 END TEST rpc 00:09:47.488 ************************************ 00:09:47.488 13:29:46 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.488 13:29:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:47.488 13:29:46 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:47.488 13:29:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:47.488 13:29:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.488 13:29:46 -- common/autotest_common.sh@10 -- # set +x 00:09:47.488 ************************************ 00:09:47.488 START TEST skip_rpc 00:09:47.488 ************************************ 00:09:47.488 13:29:46 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:47.488 * Looking for test storage... 
00:09:47.488 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:47.488 13:29:46 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:47.488 13:29:46 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:47.488 13:29:46 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:47.748 13:29:47 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:47.748 13:29:47 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:47.748 13:29:47 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:47.748 13:29:47 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:47.748 13:29:47 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:47.748 13:29:47 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:47.748 13:29:47 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:47.748 13:29:47 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:47.748 13:29:47 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:47.748 13:29:47 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:47.748 13:29:47 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:47.748 13:29:47 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:47.748 13:29:47 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:47.748 13:29:47 skip_rpc -- scripts/common.sh@345 -- # : 1 00:09:47.748 13:29:47 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:47.748 13:29:47 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:47.748 13:29:47 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:47.748 13:29:47 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:09:47.748 13:29:47 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:47.748 13:29:47 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:09:47.748 13:29:47 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:47.748 13:29:47 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:47.748 13:29:47 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:09:47.748 13:29:47 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:47.748 13:29:47 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:09:47.748 13:29:47 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:47.748 13:29:47 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:47.748 13:29:47 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:47.748 13:29:47 skip_rpc -- scripts/common.sh@368 -- # return 0 00:09:47.748 13:29:47 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:47.748 13:29:47 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:47.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.748 --rc genhtml_branch_coverage=1 00:09:47.748 --rc genhtml_function_coverage=1 00:09:47.748 --rc genhtml_legend=1 00:09:47.748 --rc geninfo_all_blocks=1 00:09:47.748 --rc geninfo_unexecuted_blocks=1 00:09:47.748 00:09:47.748 ' 00:09:47.748 13:29:47 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:47.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.748 --rc genhtml_branch_coverage=1 00:09:47.748 --rc genhtml_function_coverage=1 00:09:47.748 --rc genhtml_legend=1 00:09:47.748 --rc geninfo_all_blocks=1 00:09:47.748 --rc geninfo_unexecuted_blocks=1 00:09:47.748 00:09:47.748 ' 00:09:47.748 13:29:47 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:09:47.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.748 --rc genhtml_branch_coverage=1 00:09:47.748 --rc genhtml_function_coverage=1 00:09:47.748 --rc genhtml_legend=1 00:09:47.748 --rc geninfo_all_blocks=1 00:09:47.748 --rc geninfo_unexecuted_blocks=1 00:09:47.748 00:09:47.748 ' 00:09:47.748 13:29:47 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:47.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.748 --rc genhtml_branch_coverage=1 00:09:47.748 --rc genhtml_function_coverage=1 00:09:47.748 --rc genhtml_legend=1 00:09:47.748 --rc geninfo_all_blocks=1 00:09:47.748 --rc geninfo_unexecuted_blocks=1 00:09:47.748 00:09:47.748 ' 00:09:47.749 13:29:47 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:47.749 13:29:47 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:47.749 13:29:47 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:09:47.749 13:29:47 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:47.749 13:29:47 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.749 13:29:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:47.749 ************************************ 00:09:47.749 START TEST skip_rpc 00:09:47.749 ************************************ 00:09:47.749 13:29:47 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:09:47.749 13:29:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=56905 00:09:47.749 13:29:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:09:47.749 13:29:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:47.749 13:29:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:09:47.749 [2024-11-20 13:29:47.166893] Starting SPDK v25.01-pre 
git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:09:47.749 [2024-11-20 13:29:47.167255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56905 ] 00:09:48.008 [2024-11-20 13:29:47.348833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.008 [2024-11-20 13:29:47.464414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.369 13:29:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:09:53.369 13:29:52 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:09:53.369 13:29:52 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:09:53.370 13:29:52 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:53.370 13:29:52 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:53.370 13:29:52 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:53.370 13:29:52 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:53.370 13:29:52 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:09:53.370 13:29:52 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.370 13:29:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.370 13:29:52 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:53.370 13:29:52 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:09:53.370 13:29:52 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:53.370 13:29:52 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:53.370 13:29:52 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:09:53.370 13:29:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:09:53.370 13:29:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56905 00:09:53.370 13:29:52 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 56905 ']' 00:09:53.370 13:29:52 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 56905 00:09:53.370 13:29:52 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:09:53.370 13:29:52 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:53.370 13:29:52 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56905 00:09:53.370 13:29:52 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:53.370 13:29:52 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:53.370 13:29:52 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56905' 00:09:53.370 killing process with pid 56905 00:09:53.370 13:29:52 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 56905 00:09:53.370 13:29:52 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 56905 00:09:55.274 00:09:55.274 real 0m7.488s 00:09:55.274 ************************************ 00:09:55.274 END TEST skip_rpc 00:09:55.274 ************************************ 00:09:55.274 user 0m7.010s 00:09:55.274 sys 0m0.398s 00:09:55.274 13:29:54 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.274 13:29:54 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:55.274 13:29:54 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:09:55.274 13:29:54 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:55.274 13:29:54 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.274 13:29:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:55.274 
************************************ 00:09:55.274 START TEST skip_rpc_with_json 00:09:55.274 ************************************ 00:09:55.274 13:29:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:09:55.274 13:29:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:09:55.274 13:29:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57009 00:09:55.274 13:29:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:55.274 13:29:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:55.274 13:29:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57009 00:09:55.274 13:29:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57009 ']' 00:09:55.274 13:29:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.274 13:29:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:55.274 13:29:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.274 13:29:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:55.274 13:29:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:55.274 [2024-11-20 13:29:54.714285] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:09:55.274 [2024-11-20 13:29:54.714648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57009 ] 00:09:55.532 [2024-11-20 13:29:54.886373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.532 [2024-11-20 13:29:55.002086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.467 13:29:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:56.467 13:29:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:09:56.467 13:29:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:09:56.467 13:29:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.467 13:29:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:56.467 [2024-11-20 13:29:55.853485] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:09:56.467 request: 00:09:56.467 { 00:09:56.467 "trtype": "tcp", 00:09:56.467 "method": "nvmf_get_transports", 00:09:56.467 "req_id": 1 00:09:56.467 } 00:09:56.467 Got JSON-RPC error response 00:09:56.467 response: 00:09:56.467 { 00:09:56.467 "code": -19, 00:09:56.467 "message": "No such device" 00:09:56.467 } 00:09:56.467 13:29:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:56.467 13:29:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:09:56.467 13:29:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.467 13:29:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:56.467 [2024-11-20 13:29:55.869590] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:09:56.467 13:29:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.467 13:29:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:09:56.467 13:29:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.467 13:29:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:56.726 13:29:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.726 13:29:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:56.726 { 00:09:56.726 "subsystems": [ 00:09:56.726 { 00:09:56.726 "subsystem": "fsdev", 00:09:56.726 "config": [ 00:09:56.726 { 00:09:56.726 "method": "fsdev_set_opts", 00:09:56.726 "params": { 00:09:56.726 "fsdev_io_pool_size": 65535, 00:09:56.726 "fsdev_io_cache_size": 256 00:09:56.726 } 00:09:56.726 } 00:09:56.726 ] 00:09:56.726 }, 00:09:56.726 { 00:09:56.726 "subsystem": "keyring", 00:09:56.726 "config": [] 00:09:56.726 }, 00:09:56.726 { 00:09:56.726 "subsystem": "iobuf", 00:09:56.726 "config": [ 00:09:56.726 { 00:09:56.726 "method": "iobuf_set_options", 00:09:56.726 "params": { 00:09:56.726 "small_pool_count": 8192, 00:09:56.726 "large_pool_count": 1024, 00:09:56.726 "small_bufsize": 8192, 00:09:56.726 "large_bufsize": 135168, 00:09:56.726 "enable_numa": false 00:09:56.726 } 00:09:56.726 } 00:09:56.726 ] 00:09:56.726 }, 00:09:56.726 { 00:09:56.726 "subsystem": "sock", 00:09:56.726 "config": [ 00:09:56.726 { 00:09:56.726 "method": "sock_set_default_impl", 00:09:56.726 "params": { 00:09:56.726 "impl_name": "posix" 00:09:56.726 } 00:09:56.726 }, 00:09:56.726 { 00:09:56.726 "method": "sock_impl_set_options", 00:09:56.726 "params": { 00:09:56.726 "impl_name": "ssl", 00:09:56.726 "recv_buf_size": 4096, 00:09:56.726 "send_buf_size": 4096, 00:09:56.726 "enable_recv_pipe": true, 00:09:56.726 "enable_quickack": false, 00:09:56.726 
"enable_placement_id": 0, 00:09:56.726 "enable_zerocopy_send_server": true, 00:09:56.726 "enable_zerocopy_send_client": false, 00:09:56.726 "zerocopy_threshold": 0, 00:09:56.726 "tls_version": 0, 00:09:56.726 "enable_ktls": false 00:09:56.726 } 00:09:56.726 }, 00:09:56.726 { 00:09:56.726 "method": "sock_impl_set_options", 00:09:56.726 "params": { 00:09:56.726 "impl_name": "posix", 00:09:56.726 "recv_buf_size": 2097152, 00:09:56.726 "send_buf_size": 2097152, 00:09:56.726 "enable_recv_pipe": true, 00:09:56.726 "enable_quickack": false, 00:09:56.726 "enable_placement_id": 0, 00:09:56.726 "enable_zerocopy_send_server": true, 00:09:56.726 "enable_zerocopy_send_client": false, 00:09:56.726 "zerocopy_threshold": 0, 00:09:56.726 "tls_version": 0, 00:09:56.726 "enable_ktls": false 00:09:56.726 } 00:09:56.726 } 00:09:56.726 ] 00:09:56.726 }, 00:09:56.726 { 00:09:56.726 "subsystem": "vmd", 00:09:56.726 "config": [] 00:09:56.726 }, 00:09:56.726 { 00:09:56.726 "subsystem": "accel", 00:09:56.726 "config": [ 00:09:56.726 { 00:09:56.726 "method": "accel_set_options", 00:09:56.726 "params": { 00:09:56.726 "small_cache_size": 128, 00:09:56.726 "large_cache_size": 16, 00:09:56.726 "task_count": 2048, 00:09:56.726 "sequence_count": 2048, 00:09:56.727 "buf_count": 2048 00:09:56.727 } 00:09:56.727 } 00:09:56.727 ] 00:09:56.727 }, 00:09:56.727 { 00:09:56.727 "subsystem": "bdev", 00:09:56.727 "config": [ 00:09:56.727 { 00:09:56.727 "method": "bdev_set_options", 00:09:56.727 "params": { 00:09:56.727 "bdev_io_pool_size": 65535, 00:09:56.727 "bdev_io_cache_size": 256, 00:09:56.727 "bdev_auto_examine": true, 00:09:56.727 "iobuf_small_cache_size": 128, 00:09:56.727 "iobuf_large_cache_size": 16 00:09:56.727 } 00:09:56.727 }, 00:09:56.727 { 00:09:56.727 "method": "bdev_raid_set_options", 00:09:56.727 "params": { 00:09:56.727 "process_window_size_kb": 1024, 00:09:56.727 "process_max_bandwidth_mb_sec": 0 00:09:56.727 } 00:09:56.727 }, 00:09:56.727 { 00:09:56.727 "method": "bdev_iscsi_set_options", 
00:09:56.727 "params": { 00:09:56.727 "timeout_sec": 30 00:09:56.727 } 00:09:56.727 }, 00:09:56.727 { 00:09:56.727 "method": "bdev_nvme_set_options", 00:09:56.727 "params": { 00:09:56.727 "action_on_timeout": "none", 00:09:56.727 "timeout_us": 0, 00:09:56.727 "timeout_admin_us": 0, 00:09:56.727 "keep_alive_timeout_ms": 10000, 00:09:56.727 "arbitration_burst": 0, 00:09:56.727 "low_priority_weight": 0, 00:09:56.727 "medium_priority_weight": 0, 00:09:56.727 "high_priority_weight": 0, 00:09:56.727 "nvme_adminq_poll_period_us": 10000, 00:09:56.727 "nvme_ioq_poll_period_us": 0, 00:09:56.727 "io_queue_requests": 0, 00:09:56.727 "delay_cmd_submit": true, 00:09:56.727 "transport_retry_count": 4, 00:09:56.727 "bdev_retry_count": 3, 00:09:56.727 "transport_ack_timeout": 0, 00:09:56.727 "ctrlr_loss_timeout_sec": 0, 00:09:56.727 "reconnect_delay_sec": 0, 00:09:56.727 "fast_io_fail_timeout_sec": 0, 00:09:56.727 "disable_auto_failback": false, 00:09:56.727 "generate_uuids": false, 00:09:56.727 "transport_tos": 0, 00:09:56.727 "nvme_error_stat": false, 00:09:56.727 "rdma_srq_size": 0, 00:09:56.727 "io_path_stat": false, 00:09:56.727 "allow_accel_sequence": false, 00:09:56.727 "rdma_max_cq_size": 0, 00:09:56.727 "rdma_cm_event_timeout_ms": 0, 00:09:56.727 "dhchap_digests": [ 00:09:56.727 "sha256", 00:09:56.727 "sha384", 00:09:56.727 "sha512" 00:09:56.727 ], 00:09:56.727 "dhchap_dhgroups": [ 00:09:56.727 "null", 00:09:56.727 "ffdhe2048", 00:09:56.727 "ffdhe3072", 00:09:56.727 "ffdhe4096", 00:09:56.727 "ffdhe6144", 00:09:56.727 "ffdhe8192" 00:09:56.727 ] 00:09:56.727 } 00:09:56.727 }, 00:09:56.727 { 00:09:56.727 "method": "bdev_nvme_set_hotplug", 00:09:56.727 "params": { 00:09:56.727 "period_us": 100000, 00:09:56.727 "enable": false 00:09:56.727 } 00:09:56.727 }, 00:09:56.727 { 00:09:56.727 "method": "bdev_wait_for_examine" 00:09:56.727 } 00:09:56.727 ] 00:09:56.727 }, 00:09:56.727 { 00:09:56.727 "subsystem": "scsi", 00:09:56.727 "config": null 00:09:56.727 }, 00:09:56.727 { 
00:09:56.727 "subsystem": "scheduler", 00:09:56.727 "config": [ 00:09:56.727 { 00:09:56.727 "method": "framework_set_scheduler", 00:09:56.727 "params": { 00:09:56.727 "name": "static" 00:09:56.727 } 00:09:56.727 } 00:09:56.727 ] 00:09:56.727 }, 00:09:56.727 { 00:09:56.727 "subsystem": "vhost_scsi", 00:09:56.727 "config": [] 00:09:56.727 }, 00:09:56.727 { 00:09:56.727 "subsystem": "vhost_blk", 00:09:56.727 "config": [] 00:09:56.727 }, 00:09:56.727 { 00:09:56.727 "subsystem": "ublk", 00:09:56.727 "config": [] 00:09:56.727 }, 00:09:56.727 { 00:09:56.727 "subsystem": "nbd", 00:09:56.727 "config": [] 00:09:56.727 }, 00:09:56.727 { 00:09:56.727 "subsystem": "nvmf", 00:09:56.727 "config": [ 00:09:56.727 { 00:09:56.727 "method": "nvmf_set_config", 00:09:56.727 "params": { 00:09:56.727 "discovery_filter": "match_any", 00:09:56.727 "admin_cmd_passthru": { 00:09:56.727 "identify_ctrlr": false 00:09:56.727 }, 00:09:56.727 "dhchap_digests": [ 00:09:56.727 "sha256", 00:09:56.727 "sha384", 00:09:56.727 "sha512" 00:09:56.727 ], 00:09:56.727 "dhchap_dhgroups": [ 00:09:56.727 "null", 00:09:56.727 "ffdhe2048", 00:09:56.727 "ffdhe3072", 00:09:56.727 "ffdhe4096", 00:09:56.727 "ffdhe6144", 00:09:56.727 "ffdhe8192" 00:09:56.727 ] 00:09:56.727 } 00:09:56.727 }, 00:09:56.727 { 00:09:56.727 "method": "nvmf_set_max_subsystems", 00:09:56.727 "params": { 00:09:56.727 "max_subsystems": 1024 00:09:56.727 } 00:09:56.727 }, 00:09:56.727 { 00:09:56.727 "method": "nvmf_set_crdt", 00:09:56.727 "params": { 00:09:56.727 "crdt1": 0, 00:09:56.727 "crdt2": 0, 00:09:56.727 "crdt3": 0 00:09:56.727 } 00:09:56.727 }, 00:09:56.727 { 00:09:56.727 "method": "nvmf_create_transport", 00:09:56.727 "params": { 00:09:56.727 "trtype": "TCP", 00:09:56.727 "max_queue_depth": 128, 00:09:56.727 "max_io_qpairs_per_ctrlr": 127, 00:09:56.727 "in_capsule_data_size": 4096, 00:09:56.727 "max_io_size": 131072, 00:09:56.727 "io_unit_size": 131072, 00:09:56.727 "max_aq_depth": 128, 00:09:56.727 "num_shared_buffers": 511, 
00:09:56.727 "buf_cache_size": 4294967295, 00:09:56.727 "dif_insert_or_strip": false, 00:09:56.727 "zcopy": false, 00:09:56.727 "c2h_success": true, 00:09:56.727 "sock_priority": 0, 00:09:56.727 "abort_timeout_sec": 1, 00:09:56.727 "ack_timeout": 0, 00:09:56.727 "data_wr_pool_size": 0 00:09:56.727 } 00:09:56.727 } 00:09:56.727 ] 00:09:56.727 }, 00:09:56.727 { 00:09:56.727 "subsystem": "iscsi", 00:09:56.727 "config": [ 00:09:56.727 { 00:09:56.727 "method": "iscsi_set_options", 00:09:56.727 "params": { 00:09:56.727 "node_base": "iqn.2016-06.io.spdk", 00:09:56.727 "max_sessions": 128, 00:09:56.727 "max_connections_per_session": 2, 00:09:56.727 "max_queue_depth": 64, 00:09:56.727 "default_time2wait": 2, 00:09:56.727 "default_time2retain": 20, 00:09:56.727 "first_burst_length": 8192, 00:09:56.727 "immediate_data": true, 00:09:56.727 "allow_duplicated_isid": false, 00:09:56.727 "error_recovery_level": 0, 00:09:56.727 "nop_timeout": 60, 00:09:56.727 "nop_in_interval": 30, 00:09:56.727 "disable_chap": false, 00:09:56.727 "require_chap": false, 00:09:56.727 "mutual_chap": false, 00:09:56.727 "chap_group": 0, 00:09:56.727 "max_large_datain_per_connection": 64, 00:09:56.727 "max_r2t_per_connection": 4, 00:09:56.727 "pdu_pool_size": 36864, 00:09:56.727 "immediate_data_pool_size": 16384, 00:09:56.727 "data_out_pool_size": 2048 00:09:56.727 } 00:09:56.727 } 00:09:56.727 ] 00:09:56.727 } 00:09:56.727 ] 00:09:56.727 } 00:09:56.727 13:29:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:56.727 13:29:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57009 00:09:56.727 13:29:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57009 ']' 00:09:56.727 13:29:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57009 00:09:56.727 13:29:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:09:56.727 13:29:56 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:56.727 13:29:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57009 00:09:56.727 killing process with pid 57009 00:09:56.727 13:29:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:56.727 13:29:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:56.727 13:29:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57009' 00:09:56.727 13:29:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57009 00:09:56.727 13:29:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57009 00:09:59.261 13:29:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57065 00:09:59.261 13:29:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:09:59.261 13:29:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:04.527 13:30:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57065 00:10:04.527 13:30:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57065 ']' 00:10:04.527 13:30:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57065 00:10:04.527 13:30:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:10:04.527 13:30:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:04.527 13:30:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57065 00:10:04.527 killing process with pid 57065 00:10:04.527 13:30:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:04.527 13:30:03 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:04.527 13:30:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57065' 00:10:04.527 13:30:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57065 00:10:04.527 13:30:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57065 00:10:07.059 13:30:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:07.059 13:30:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:07.059 00:10:07.059 real 0m11.424s 00:10:07.059 user 0m10.857s 00:10:07.059 sys 0m0.902s 00:10:07.059 13:30:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.059 ************************************ 00:10:07.059 END TEST skip_rpc_with_json 00:10:07.059 ************************************ 00:10:07.059 13:30:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:07.059 13:30:06 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:10:07.059 13:30:06 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:07.059 13:30:06 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.059 13:30:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:07.059 ************************************ 00:10:07.059 START TEST skip_rpc_with_delay 00:10:07.059 ************************************ 00:10:07.059 13:30:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:10:07.059 13:30:06 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:07.059 13:30:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:10:07.059 
13:30:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:07.059 13:30:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:07.059 13:30:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:07.059 13:30:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:07.059 13:30:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:07.059 13:30:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:07.059 13:30:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:07.059 13:30:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:07.059 13:30:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:10:07.059 13:30:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:07.059 [2024-11-20 13:30:06.220270] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:10:07.059 13:30:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:10:07.059 13:30:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:07.059 13:30:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:07.059 13:30:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:07.059 00:10:07.059 real 0m0.189s 00:10:07.059 user 0m0.088s 00:10:07.059 sys 0m0.099s 00:10:07.059 13:30:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.059 13:30:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:10:07.059 ************************************ 00:10:07.059 END TEST skip_rpc_with_delay 00:10:07.059 ************************************ 00:10:07.059 13:30:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:10:07.059 13:30:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:10:07.059 13:30:06 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:10:07.059 13:30:06 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:07.059 13:30:06 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.059 13:30:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:07.059 ************************************ 00:10:07.059 START TEST exit_on_failed_rpc_init 00:10:07.059 ************************************ 00:10:07.059 13:30:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:10:07.059 13:30:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57204 00:10:07.059 13:30:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:07.059 13:30:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57204 00:10:07.059 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.059 13:30:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57204 ']' 00:10:07.059 13:30:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.059 13:30:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:07.059 13:30:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.059 13:30:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:07.059 13:30:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:07.059 [2024-11-20 13:30:06.475689] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:10:07.059 [2024-11-20 13:30:06.476027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57204 ] 00:10:07.315 [2024-11-20 13:30:06.656918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.315 [2024-11-20 13:30:06.776291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.247 13:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:08.247 13:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:10:08.247 13:30:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:08.247 13:30:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:08.247 13:30:07 skip_rpc.exit_on_failed_rpc_init -- 
common/autotest_common.sh@652 -- # local es=0 00:10:08.247 13:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:08.247 13:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:08.247 13:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:08.247 13:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:08.247 13:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:08.247 13:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:08.247 13:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:08.247 13:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:08.247 13:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:10:08.247 13:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:08.506 [2024-11-20 13:30:07.819928] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:10:08.506 [2024-11-20 13:30:07.820283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57222 ] 00:10:08.764 [2024-11-20 13:30:07.998869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.764 [2024-11-20 13:30:08.121686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.764 [2024-11-20 13:30:08.122045] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:10:08.765 [2024-11-20 13:30:08.122081] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:10:08.765 [2024-11-20 13:30:08.122096] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:09.023 13:30:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:10:09.023 13:30:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:09.023 13:30:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:10:09.023 13:30:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:10:09.023 13:30:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:10:09.023 13:30:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:09.023 13:30:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:09.023 13:30:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57204 00:10:09.023 13:30:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57204 ']' 00:10:09.023 13:30:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57204 00:10:09.024 13:30:08 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:10:09.024 13:30:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:09.024 13:30:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57204 00:10:09.024 13:30:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:09.024 13:30:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:09.024 13:30:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57204' 00:10:09.024 killing process with pid 57204 00:10:09.024 13:30:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57204 00:10:09.024 13:30:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57204 00:10:11.556 00:10:11.556 real 0m4.468s 00:10:11.556 user 0m4.816s 00:10:11.556 sys 0m0.624s 00:10:11.556 13:30:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.556 ************************************ 00:10:11.556 END TEST exit_on_failed_rpc_init 00:10:11.556 ************************************ 00:10:11.556 13:30:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:11.556 13:30:10 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:11.556 ************************************ 00:10:11.556 END TEST skip_rpc 00:10:11.556 ************************************ 00:10:11.556 00:10:11.556 real 0m24.086s 00:10:11.556 user 0m22.987s 00:10:11.556 sys 0m2.337s 00:10:11.556 13:30:10 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.556 13:30:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.556 13:30:10 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:10:11.556 13:30:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:11.556 13:30:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.556 13:30:10 -- common/autotest_common.sh@10 -- # set +x 00:10:11.556 ************************************ 00:10:11.556 START TEST rpc_client 00:10:11.556 ************************************ 00:10:11.556 13:30:10 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:10:11.815 * Looking for test storage... 00:10:11.815 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:10:11.815 13:30:11 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:11.815 13:30:11 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:10:11.815 13:30:11 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:11.815 13:30:11 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:11.815 13:30:11 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:11.815 13:30:11 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:11.815 13:30:11 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:11.815 13:30:11 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.815 13:30:11 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:10:11.815 13:30:11 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:10:11.815 13:30:11 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:10:11.815 13:30:11 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:10:11.815 13:30:11 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:10:11.815 13:30:11 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:10:11.815 13:30:11 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:11.815 13:30:11 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:10:11.815 13:30:11 rpc_client -- scripts/common.sh@345 
-- # : 1 00:10:11.815 13:30:11 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:11.815 13:30:11 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:11.815 13:30:11 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:10:11.816 13:30:11 rpc_client -- scripts/common.sh@353 -- # local d=1 00:10:11.816 13:30:11 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.816 13:30:11 rpc_client -- scripts/common.sh@355 -- # echo 1 00:10:11.816 13:30:11 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:10:11.816 13:30:11 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:10:11.816 13:30:11 rpc_client -- scripts/common.sh@353 -- # local d=2 00:10:11.816 13:30:11 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.816 13:30:11 rpc_client -- scripts/common.sh@355 -- # echo 2 00:10:11.816 13:30:11 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:10:11.816 13:30:11 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:11.816 13:30:11 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:11.816 13:30:11 rpc_client -- scripts/common.sh@368 -- # return 0 00:10:11.816 13:30:11 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:11.816 13:30:11 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:11.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.816 --rc genhtml_branch_coverage=1 00:10:11.816 --rc genhtml_function_coverage=1 00:10:11.816 --rc genhtml_legend=1 00:10:11.816 --rc geninfo_all_blocks=1 00:10:11.816 --rc geninfo_unexecuted_blocks=1 00:10:11.816 00:10:11.816 ' 00:10:11.816 13:30:11 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:11.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.816 --rc genhtml_branch_coverage=1 00:10:11.816 --rc genhtml_function_coverage=1 00:10:11.816 --rc 
genhtml_legend=1 00:10:11.816 --rc geninfo_all_blocks=1 00:10:11.816 --rc geninfo_unexecuted_blocks=1 00:10:11.816 00:10:11.816 ' 00:10:11.816 13:30:11 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:11.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.816 --rc genhtml_branch_coverage=1 00:10:11.816 --rc genhtml_function_coverage=1 00:10:11.816 --rc genhtml_legend=1 00:10:11.816 --rc geninfo_all_blocks=1 00:10:11.816 --rc geninfo_unexecuted_blocks=1 00:10:11.816 00:10:11.816 ' 00:10:11.816 13:30:11 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:11.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.816 --rc genhtml_branch_coverage=1 00:10:11.816 --rc genhtml_function_coverage=1 00:10:11.816 --rc genhtml_legend=1 00:10:11.816 --rc geninfo_all_blocks=1 00:10:11.816 --rc geninfo_unexecuted_blocks=1 00:10:11.816 00:10:11.816 ' 00:10:11.816 13:30:11 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:10:11.816 OK 00:10:11.816 13:30:11 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:10:11.816 00:10:11.816 real 0m0.304s 00:10:11.816 user 0m0.166s 00:10:11.816 sys 0m0.158s 00:10:11.816 13:30:11 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.816 13:30:11 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:10:11.816 ************************************ 00:10:11.816 END TEST rpc_client 00:10:11.816 ************************************ 00:10:12.073 13:30:11 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:10:12.073 13:30:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:12.073 13:30:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.073 13:30:11 -- common/autotest_common.sh@10 -- # set +x 00:10:12.073 ************************************ 00:10:12.073 START TEST json_config 
00:10:12.073 ************************************ 00:10:12.073 13:30:11 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:10:12.073 13:30:11 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:12.073 13:30:11 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:10:12.073 13:30:11 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:12.073 13:30:11 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:12.073 13:30:11 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:12.073 13:30:11 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:12.073 13:30:11 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:12.073 13:30:11 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:10:12.073 13:30:11 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:10:12.073 13:30:11 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:10:12.073 13:30:11 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:10:12.073 13:30:11 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:10:12.073 13:30:11 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:10:12.073 13:30:11 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:10:12.073 13:30:11 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:12.073 13:30:11 json_config -- scripts/common.sh@344 -- # case "$op" in 00:10:12.073 13:30:11 json_config -- scripts/common.sh@345 -- # : 1 00:10:12.073 13:30:11 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:12.073 13:30:11 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) ))
00:10:12.073 13:30:11 json_config -- scripts/common.sh@365 -- # decimal 1
00:10:12.073 13:30:11 json_config -- scripts/common.sh@353 -- # local d=1
00:10:12.073 13:30:11 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:12.073 13:30:11 json_config -- scripts/common.sh@355 -- # echo 1
00:10:12.073 13:30:11 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:10:12.073 13:30:11 json_config -- scripts/common.sh@366 -- # decimal 2
00:10:12.073 13:30:11 json_config -- scripts/common.sh@353 -- # local d=2
00:10:12.073 13:30:11 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:12.073 13:30:11 json_config -- scripts/common.sh@355 -- # echo 2
00:10:12.073 13:30:11 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:10:12.073 13:30:11 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:12.073 13:30:11 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:12.073 13:30:11 json_config -- scripts/common.sh@368 -- # return 0
00:10:12.073 13:30:11 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:12.073 13:30:11 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:10:12.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:12.073 --rc genhtml_branch_coverage=1
00:10:12.073 --rc genhtml_function_coverage=1
00:10:12.073 --rc genhtml_legend=1
00:10:12.073 --rc geninfo_all_blocks=1
00:10:12.073 --rc geninfo_unexecuted_blocks=1
00:10:12.073 
00:10:12.073 '
00:10:12.073 13:30:11 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:10:12.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:12.073 --rc genhtml_branch_coverage=1
00:10:12.073 --rc genhtml_function_coverage=1
00:10:12.073 --rc genhtml_legend=1
00:10:12.073 --rc geninfo_all_blocks=1
00:10:12.073 --rc geninfo_unexecuted_blocks=1
00:10:12.073 
00:10:12.073 '
00:10:12.073 13:30:11 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:10:12.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:12.073 --rc genhtml_branch_coverage=1
00:10:12.073 --rc genhtml_function_coverage=1
00:10:12.073 --rc genhtml_legend=1
00:10:12.073 --rc geninfo_all_blocks=1
00:10:12.073 --rc geninfo_unexecuted_blocks=1
00:10:12.073 
00:10:12.073 '
00:10:12.073 13:30:11 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:10:12.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:12.073 --rc genhtml_branch_coverage=1
00:10:12.073 --rc genhtml_function_coverage=1
00:10:12.073 --rc genhtml_legend=1
00:10:12.073 --rc geninfo_all_blocks=1
00:10:12.073 --rc geninfo_unexecuted_blocks=1
00:10:12.073 
00:10:12.074 '
00:10:12.074 13:30:11 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:10:12.074 13:30:11 json_config -- nvmf/common.sh@7 -- # uname -s
00:10:12.074 13:30:11 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:12.074 13:30:11 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:12.074 13:30:11 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:12.074 13:30:11 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:12.074 13:30:11 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:12.074 13:30:11 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:12.074 13:30:11 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:12.074 13:30:11 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:12.074 13:30:11 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:12.074 13:30:11 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:12.332 13:30:11 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38e61dd9-7663-487a-9216-d82314e42e23
00:10:12.332 13:30:11 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=38e61dd9-7663-487a-9216-d82314e42e23
00:10:12.332 13:30:11 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:12.332 13:30:11 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:12.332 13:30:11 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:10:12.332 13:30:11 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:12.332 13:30:11 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:10:12.332 13:30:11 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:10:12.332 13:30:11 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:12.332 13:30:11 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:12.332 13:30:11 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:12.332 13:30:11 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:12.332 13:30:11 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:12.332 13:30:11 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:12.332 13:30:11 json_config -- paths/export.sh@5 -- # export PATH
00:10:12.332 13:30:11 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:12.332 13:30:11 json_config -- nvmf/common.sh@51 -- # : 0
00:10:12.332 13:30:11 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:10:12.332 13:30:11 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:10:12.332 13:30:11 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:12.332 13:30:11 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:12.332 13:30:11 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:12.332 13:30:11 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:10:12.332 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:10:12.332 13:30:11 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:10:12.332 13:30:11 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:10:12.332 13:30:11 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0
00:10:12.332 13:30:11 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh
00:10:12.332 13:30:11 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:10:12.332 13:30:11 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:10:12.332 13:30:11 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:10:12.332 13:30:11 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 ))
00:10:12.332 WARNING: No tests are enabled so not running JSON configuration tests
00:10:12.332 13:30:11 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests'
00:10:12.332 13:30:11 json_config -- json_config/json_config.sh@28 -- # exit 0
00:10:12.332 
00:10:12.332 real 0m0.249s
00:10:12.332 user 0m0.153s
00:10:12.332 sys 0m0.107s
00:10:12.332 13:30:11 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:12.332 13:30:11 json_config -- common/autotest_common.sh@10 -- # set +x
00:10:12.332 ************************************
00:10:12.332 END TEST json_config
00:10:12.332 ************************************
00:10:12.332 13:30:11 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh
00:10:12.333 13:30:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:12.333 13:30:11 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:12.333 13:30:11 -- common/autotest_common.sh@10 -- # set +x
00:10:12.333 ************************************
00:10:12.333 START TEST json_config_extra_key
00:10:12.333 ************************************
00:10:12.333 13:30:11 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh
00:10:12.333 13:30:11 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:10:12.333 13:30:11 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version
00:10:12.333 13:30:11 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:10:12.333 13:30:11 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:10:12.333 13:30:11 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:12.333 13:30:11 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:12.333 13:30:11 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:12.333 13:30:11 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:10:12.333 13:30:11 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:10:12.333 13:30:11 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:10:12.333 13:30:11 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:10:12.333 13:30:11 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:10:12.333 13:30:11 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:10:12.333 13:30:11 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:10:12.333 13:30:11 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:12.333 13:30:11 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in
00:10:12.333 13:30:11 json_config_extra_key -- scripts/common.sh@345 -- # : 1
00:10:12.333 13:30:11 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:12.333 13:30:11 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:12.333 13:30:11 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1
00:10:12.333 13:30:11 json_config_extra_key -- scripts/common.sh@353 -- # local d=1
00:10:12.333 13:30:11 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:12.593 13:30:11 json_config_extra_key -- scripts/common.sh@355 -- # echo 1
00:10:12.593 13:30:11 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1
00:10:12.593 13:30:11 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2
00:10:12.593 13:30:11 json_config_extra_key -- scripts/common.sh@353 -- # local d=2
00:10:12.593 13:30:11 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:12.593 13:30:11 json_config_extra_key -- scripts/common.sh@355 -- # echo 2
00:10:12.593 13:30:11 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2
00:10:12.593 13:30:11 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:12.593 13:30:11 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:12.593 13:30:11 json_config_extra_key -- scripts/common.sh@368 -- # return 0
00:10:12.593 13:30:11 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:12.593 13:30:11 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:10:12.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:12.593 --rc genhtml_branch_coverage=1
00:10:12.593 --rc genhtml_function_coverage=1
00:10:12.593 --rc genhtml_legend=1
00:10:12.593 --rc geninfo_all_blocks=1
00:10:12.593 --rc geninfo_unexecuted_blocks=1
00:10:12.593 
00:10:12.593 '
00:10:12.593 13:30:11 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:10:12.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:12.593 --rc genhtml_branch_coverage=1
00:10:12.593 --rc genhtml_function_coverage=1
00:10:12.593 --rc genhtml_legend=1
00:10:12.593 --rc geninfo_all_blocks=1
00:10:12.593 --rc geninfo_unexecuted_blocks=1
00:10:12.593 
00:10:12.593 '
00:10:12.593 13:30:11 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:10:12.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:12.593 --rc genhtml_branch_coverage=1
00:10:12.593 --rc genhtml_function_coverage=1
00:10:12.593 --rc genhtml_legend=1
00:10:12.593 --rc geninfo_all_blocks=1
00:10:12.593 --rc geninfo_unexecuted_blocks=1
00:10:12.593 
00:10:12.593 '
00:10:12.593 13:30:11 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:10:12.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:12.593 --rc genhtml_branch_coverage=1
00:10:12.593 --rc genhtml_function_coverage=1
00:10:12.593 --rc genhtml_legend=1
00:10:12.593 --rc geninfo_all_blocks=1
00:10:12.593 --rc geninfo_unexecuted_blocks=1
00:10:12.593 
00:10:12.593 '
00:10:12.593 13:30:11 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:10:12.593 13:30:11 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:10:12.593 13:30:11 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:12.593 13:30:11 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:12.593 13:30:11 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:12.593 13:30:11 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:12.593 13:30:11 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:12.593 13:30:11 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:12.593 13:30:11 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:12.593 13:30:11 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:12.593 13:30:11 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:12.593 13:30:11 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:12.593 13:30:11 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:38e61dd9-7663-487a-9216-d82314e42e23
00:10:12.593 13:30:11 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=38e61dd9-7663-487a-9216-d82314e42e23
00:10:12.593 13:30:11 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:12.593 13:30:11 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:12.593 13:30:11 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:10:12.593 13:30:11 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:12.593 13:30:11 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:10:12.593 13:30:11 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob
00:10:12.593 13:30:11 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:12.593 13:30:11 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:12.593 13:30:11 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:12.594 13:30:11 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:12.594 13:30:11 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:12.594 13:30:11 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:12.594 13:30:11 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:10:12.594 13:30:11 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:12.594 13:30:11 json_config_extra_key -- nvmf/common.sh@51 -- # : 0
00:10:12.594 13:30:11 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:10:12.594 13:30:11 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:10:12.594 13:30:11 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:12.594 13:30:11 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:12.594 13:30:11 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:12.594 13:30:11 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:10:12.594 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:10:12.594 13:30:11 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:10:12.594 13:30:11 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:10:12.594 13:30:11 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0
00:10:12.594 13:30:11 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh
00:10:12.594 13:30:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:10:12.594 13:30:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:10:12.594 13:30:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:10:12.594 13:30:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:10:12.594 13:30:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:10:12.594 13:30:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:10:12.594 13:30:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')
00:10:12.594 13:30:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:10:12.594 13:30:11 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:10:12.594 INFO: launching applications...
00:10:12.594 13:30:11 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:10:12.594 13:30:11 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
00:10:12.594 13:30:11 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:10:12.594 13:30:11 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:10:12.594 13:30:11 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:10:12.594 13:30:11 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:10:12.594 13:30:11 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:10:12.594 13:30:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:10:12.594 13:30:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:10:12.594 13:30:11 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57432
00:10:12.594 Waiting for target to run...
00:10:12.594 13:30:11 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:10:12.594 13:30:11 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57432 /var/tmp/spdk_tgt.sock
00:10:12.594 13:30:11 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57432 ']'
00:10:12.594 13:30:11 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:10:12.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:10:12.594 13:30:11 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:12.594 13:30:11 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:10:12.594 13:30:11 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
00:10:12.594 13:30:11 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:12.594 13:30:11 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:10:12.594 [2024-11-20 13:30:11.980810] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... [2024-11-20 13:30:11.980939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57432 ]
00:10:13.162 [2024-11-20 13:30:12.379337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:13.162 [2024-11-20 13:30:12.484031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:14.097 13:30:13 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:14.097 13:30:13 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0
00:10:14.097 
00:10:14.097 13:30:13 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:10:14.097 INFO: shutting down applications...
00:10:14.097 13:30:13 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:10:14.097 13:30:13 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:10:14.097 13:30:13 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:10:14.097 13:30:13 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:10:14.097 13:30:13 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57432 ]]
00:10:14.097 13:30:13 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57432
00:10:14.097 13:30:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:10:14.097 13:30:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:10:14.097 13:30:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57432
00:10:14.097 13:30:13 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:10:14.356 13:30:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:10:14.356 13:30:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:10:14.356 13:30:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57432
00:10:14.356 13:30:13 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:10:14.923 13:30:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:10:14.923 13:30:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:10:14.923 13:30:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57432
00:10:14.923 13:30:14 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:10:15.491 13:30:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:10:15.491 13:30:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:10:15.491 13:30:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57432
00:10:15.491 13:30:14 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:10:16.092 13:30:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:10:16.092 13:30:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:10:16.092 13:30:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57432
00:10:16.092 13:30:15 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:10:16.351 13:30:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:10:16.351 13:30:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:10:16.351 13:30:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57432
00:10:16.351 13:30:15 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:10:16.919 13:30:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:10:16.919 13:30:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:10:16.919 13:30:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57432
00:10:16.919 13:30:16 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:10:16.919 13:30:16 json_config_extra_key -- json_config/common.sh@43 -- # break
00:10:16.919 13:30:16 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:10:16.919 SPDK target shutdown done
00:10:16.919 13:30:16 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:10:16.919 Success
00:10:16.919 13:30:16 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:10:16.919 
00:10:16.919 real 0m4.640s
00:10:16.919 user 0m4.131s
00:10:16.919 sys 0m0.598s
00:10:16.919 13:30:16 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:16.919 13:30:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:10:16.919 ************************************
00:10:16.919 END TEST json_config_extra_key
00:10:16.919 ************************************
00:10:16.919 13:30:16 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:10:16.919 13:30:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:16.919 13:30:16 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:16.919 13:30:16 -- common/autotest_common.sh@10 -- # set +x
00:10:16.919 ************************************
00:10:16.919 START TEST alias_rpc
00:10:16.919 ************************************
00:10:17.178 13:30:16 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:10:17.178 * Looking for test storage...
00:10:17.178 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc
00:10:17.178 13:30:16 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:10:17.178 13:30:16 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:10:17.178 13:30:16 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:10:17.178 13:30:16 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:10:17.178 13:30:16 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:17.178 13:30:16 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:17.178 13:30:16 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:17.178 13:30:16 alias_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:10:17.178 13:30:16 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:10:17.178 13:30:16 alias_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:10:17.178 13:30:16 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:10:17.178 13:30:16 alias_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:10:17.178 13:30:16 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:10:17.178 13:30:16 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:10:17.178 13:30:16 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:17.178 13:30:16 alias_rpc -- scripts/common.sh@344 -- # case "$op" in
00:10:17.178 13:30:16 alias_rpc -- scripts/common.sh@345 -- # : 1
00:10:17.178 13:30:16 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:17.178 13:30:16 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:17.179 13:30:16 alias_rpc -- scripts/common.sh@365 -- # decimal 1
00:10:17.179 13:30:16 alias_rpc -- scripts/common.sh@353 -- # local d=1
00:10:17.179 13:30:16 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:17.179 13:30:16 alias_rpc -- scripts/common.sh@355 -- # echo 1
00:10:17.179 13:30:16 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:10:17.179 13:30:16 alias_rpc -- scripts/common.sh@366 -- # decimal 2
00:10:17.179 13:30:16 alias_rpc -- scripts/common.sh@353 -- # local d=2
00:10:17.179 13:30:16 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:17.179 13:30:16 alias_rpc -- scripts/common.sh@355 -- # echo 2
00:10:17.179 13:30:16 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:10:17.179 13:30:16 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:17.179 13:30:16 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:17.179 13:30:16 alias_rpc -- scripts/common.sh@368 -- # return 0
00:10:17.179 13:30:16 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:17.179 13:30:16 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:10:17.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:17.179 --rc genhtml_branch_coverage=1
00:10:17.179 --rc genhtml_function_coverage=1
00:10:17.179 --rc genhtml_legend=1
00:10:17.179 --rc geninfo_all_blocks=1
00:10:17.179 --rc geninfo_unexecuted_blocks=1
00:10:17.179 
00:10:17.179 '
00:10:17.179 13:30:16 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:10:17.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:17.179 --rc genhtml_branch_coverage=1
00:10:17.179 --rc genhtml_function_coverage=1
00:10:17.179 --rc genhtml_legend=1
00:10:17.179 --rc geninfo_all_blocks=1
00:10:17.179 --rc geninfo_unexecuted_blocks=1
00:10:17.179 
00:10:17.179 '
00:10:17.179 13:30:16 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:10:17.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:17.179 --rc genhtml_branch_coverage=1
00:10:17.179 --rc genhtml_function_coverage=1
00:10:17.179 --rc genhtml_legend=1
00:10:17.179 --rc geninfo_all_blocks=1
00:10:17.179 --rc geninfo_unexecuted_blocks=1
00:10:17.179 
00:10:17.179 '
00:10:17.179 13:30:16 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:10:17.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:17.179 --rc genhtml_branch_coverage=1
00:10:17.179 --rc genhtml_function_coverage=1
00:10:17.179 --rc genhtml_legend=1
00:10:17.179 --rc geninfo_all_blocks=1
00:10:17.179 --rc geninfo_unexecuted_blocks=1
00:10:17.179 
00:10:17.179 '
00:10:17.179 13:30:16 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:10:17.179 13:30:16 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57538
00:10:17.179 13:30:16 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:10:17.179 13:30:16 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57538
00:10:17.179 13:30:16 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57538 ']'
00:10:17.179 13:30:16 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:17.179 13:30:16 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:17.179 13:30:16 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:17.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:17.179 13:30:16 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:17.179 13:30:16 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:17.437 [2024-11-20 13:30:16.704540] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... [2024-11-20 13:30:16.704725] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57538 ]
00:10:17.437 [2024-11-20 13:30:16.895294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:17.695 [2024-11-20 13:30:17.015681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:18.632 13:30:17 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:18.632 13:30:17 alias_rpc -- common/autotest_common.sh@868 -- # return 0
00:10:18.632 13:30:17 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i
00:10:18.948 13:30:18 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57538
00:10:18.948 13:30:18 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57538 ']'
00:10:18.948 13:30:18 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57538
00:10:18.948 13:30:18 alias_rpc -- common/autotest_common.sh@959 -- # uname
00:10:18.948 13:30:18 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:18.948 13:30:18 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57538
00:10:18.948 13:30:18 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:18.948 13:30:18 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:18.948 killing process with pid 57538
00:10:18.948 13:30:18 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57538'
00:10:18.948 13:30:18 alias_rpc -- common/autotest_common.sh@973 -- # kill 57538
00:10:18.948 13:30:18 alias_rpc -- common/autotest_common.sh@978 -- # wait 57538
00:10:21.481 
00:10:21.481 real 0m4.338s
00:10:21.481 user 0m4.366s
00:10:21.481 sys 0m0.616s
00:10:21.481 13:30:20 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:21.481 13:30:20 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:21.481 ************************************
00:10:21.481 END TEST alias_rpc
00:10:21.481 ************************************
00:10:21.481 13:30:20 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]]
00:10:21.481 13:30:20 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh
00:10:21.481 13:30:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:21.481 13:30:20 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:21.481 13:30:20 -- common/autotest_common.sh@10 -- # set +x
00:10:21.481 ************************************
00:10:21.481 START TEST spdkcli_tcp
00:10:21.481 ************************************
00:10:21.481 13:30:20 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh
00:10:21.481 * Looking for test storage...
00:10:21.481 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli
00:10:21.481 13:30:20 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:10:21.481 13:30:20 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version
00:10:21.481 13:30:20 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:10:21.481 13:30:20 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:10:21.481 13:30:20 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:21.481 13:30:20 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:21.481 13:30:20 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:21.481 13:30:20 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:10:21.481 13:30:20 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:10:21.481 13:30:20 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:10:21.481 13:30:20 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:10:21.481 13:30:20 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:10:21.481 13:30:20 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:10:21.481 13:30:20 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:10:21.481 13:30:20 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:21.481 13:30:20 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in
00:10:21.481 13:30:20 spdkcli_tcp -- scripts/common.sh@345 -- # : 1
00:10:21.481 13:30:20 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:21.481 13:30:20 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:21.481 13:30:20 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:10:21.481 13:30:20 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:10:21.481 13:30:20 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:21.481 13:30:20 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:10:21.740 13:30:20 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:21.740 13:30:20 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:10:21.740 13:30:20 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:10:21.740 13:30:20 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:21.740 13:30:20 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:10:21.740 13:30:20 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:21.740 13:30:20 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:21.740 13:30:20 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:21.740 13:30:20 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:10:21.740 13:30:20 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:21.740 13:30:20 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:21.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.740 --rc genhtml_branch_coverage=1 00:10:21.740 --rc genhtml_function_coverage=1 00:10:21.740 --rc genhtml_legend=1 00:10:21.740 --rc geninfo_all_blocks=1 00:10:21.740 --rc geninfo_unexecuted_blocks=1 00:10:21.740 00:10:21.740 ' 00:10:21.740 13:30:20 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:21.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.740 --rc genhtml_branch_coverage=1 00:10:21.740 --rc genhtml_function_coverage=1 00:10:21.740 --rc genhtml_legend=1 00:10:21.740 --rc geninfo_all_blocks=1 00:10:21.740 --rc geninfo_unexecuted_blocks=1 00:10:21.740 00:10:21.740 ' 00:10:21.740 13:30:20 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:21.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.740 --rc genhtml_branch_coverage=1 00:10:21.740 --rc genhtml_function_coverage=1 00:10:21.741 --rc genhtml_legend=1 00:10:21.741 --rc geninfo_all_blocks=1 00:10:21.741 --rc geninfo_unexecuted_blocks=1 00:10:21.741 00:10:21.741 ' 00:10:21.741 13:30:20 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:21.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.741 --rc genhtml_branch_coverage=1 00:10:21.741 --rc genhtml_function_coverage=1 00:10:21.741 --rc genhtml_legend=1 00:10:21.741 --rc geninfo_all_blocks=1 00:10:21.741 --rc geninfo_unexecuted_blocks=1 00:10:21.741 00:10:21.741 ' 00:10:21.741 13:30:20 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:10:21.741 13:30:20 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:10:21.741 13:30:20 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:10:21.741 13:30:20 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:10:21.741 13:30:20 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:10:21.741 13:30:20 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:21.741 13:30:20 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:10:21.741 13:30:20 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:21.741 13:30:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:21.741 13:30:20 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57651 00:10:21.741 13:30:20 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:10:21.741 13:30:20 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57651 00:10:21.741 13:30:20 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 57651 ']' 00:10:21.741 13:30:20 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.741 13:30:20 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:21.741 13:30:20 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.741 13:30:20 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:21.741 13:30:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:21.741 [2024-11-20 13:30:21.094776] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:10:21.741 [2024-11-20 13:30:21.095131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57651 ] 00:10:21.999 [2024-11-20 13:30:21.277100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:21.999 [2024-11-20 13:30:21.395333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.999 [2024-11-20 13:30:21.395369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:22.935 13:30:22 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:22.935 13:30:22 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:10:22.935 13:30:22 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:10:22.935 13:30:22 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57673 00:10:22.935 13:30:22 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:10:23.197 [ 00:10:23.197 "bdev_malloc_delete", 
00:10:23.197 "bdev_malloc_create", 00:10:23.197 "bdev_null_resize", 00:10:23.197 "bdev_null_delete", 00:10:23.197 "bdev_null_create", 00:10:23.197 "bdev_nvme_cuse_unregister", 00:10:23.197 "bdev_nvme_cuse_register", 00:10:23.197 "bdev_opal_new_user", 00:10:23.197 "bdev_opal_set_lock_state", 00:10:23.197 "bdev_opal_delete", 00:10:23.197 "bdev_opal_get_info", 00:10:23.197 "bdev_opal_create", 00:10:23.197 "bdev_nvme_opal_revert", 00:10:23.197 "bdev_nvme_opal_init", 00:10:23.197 "bdev_nvme_send_cmd", 00:10:23.197 "bdev_nvme_set_keys", 00:10:23.197 "bdev_nvme_get_path_iostat", 00:10:23.197 "bdev_nvme_get_mdns_discovery_info", 00:10:23.197 "bdev_nvme_stop_mdns_discovery", 00:10:23.197 "bdev_nvme_start_mdns_discovery", 00:10:23.197 "bdev_nvme_set_multipath_policy", 00:10:23.197 "bdev_nvme_set_preferred_path", 00:10:23.197 "bdev_nvme_get_io_paths", 00:10:23.197 "bdev_nvme_remove_error_injection", 00:10:23.197 "bdev_nvme_add_error_injection", 00:10:23.197 "bdev_nvme_get_discovery_info", 00:10:23.197 "bdev_nvme_stop_discovery", 00:10:23.197 "bdev_nvme_start_discovery", 00:10:23.197 "bdev_nvme_get_controller_health_info", 00:10:23.197 "bdev_nvme_disable_controller", 00:10:23.197 "bdev_nvme_enable_controller", 00:10:23.197 "bdev_nvme_reset_controller", 00:10:23.197 "bdev_nvme_get_transport_statistics", 00:10:23.197 "bdev_nvme_apply_firmware", 00:10:23.197 "bdev_nvme_detach_controller", 00:10:23.197 "bdev_nvme_get_controllers", 00:10:23.197 "bdev_nvme_attach_controller", 00:10:23.197 "bdev_nvme_set_hotplug", 00:10:23.197 "bdev_nvme_set_options", 00:10:23.197 "bdev_passthru_delete", 00:10:23.197 "bdev_passthru_create", 00:10:23.197 "bdev_lvol_set_parent_bdev", 00:10:23.197 "bdev_lvol_set_parent", 00:10:23.197 "bdev_lvol_check_shallow_copy", 00:10:23.197 "bdev_lvol_start_shallow_copy", 00:10:23.197 "bdev_lvol_grow_lvstore", 00:10:23.197 "bdev_lvol_get_lvols", 00:10:23.197 "bdev_lvol_get_lvstores", 00:10:23.197 "bdev_lvol_delete", 00:10:23.197 "bdev_lvol_set_read_only", 
00:10:23.197 "bdev_lvol_resize", 00:10:23.197 "bdev_lvol_decouple_parent", 00:10:23.197 "bdev_lvol_inflate", 00:10:23.197 "bdev_lvol_rename", 00:10:23.197 "bdev_lvol_clone_bdev", 00:10:23.197 "bdev_lvol_clone", 00:10:23.197 "bdev_lvol_snapshot", 00:10:23.197 "bdev_lvol_create", 00:10:23.197 "bdev_lvol_delete_lvstore", 00:10:23.197 "bdev_lvol_rename_lvstore", 00:10:23.197 "bdev_lvol_create_lvstore", 00:10:23.197 "bdev_raid_set_options", 00:10:23.197 "bdev_raid_remove_base_bdev", 00:10:23.197 "bdev_raid_add_base_bdev", 00:10:23.197 "bdev_raid_delete", 00:10:23.197 "bdev_raid_create", 00:10:23.197 "bdev_raid_get_bdevs", 00:10:23.197 "bdev_error_inject_error", 00:10:23.197 "bdev_error_delete", 00:10:23.197 "bdev_error_create", 00:10:23.197 "bdev_split_delete", 00:10:23.197 "bdev_split_create", 00:10:23.197 "bdev_delay_delete", 00:10:23.197 "bdev_delay_create", 00:10:23.197 "bdev_delay_update_latency", 00:10:23.197 "bdev_zone_block_delete", 00:10:23.197 "bdev_zone_block_create", 00:10:23.197 "blobfs_create", 00:10:23.197 "blobfs_detect", 00:10:23.197 "blobfs_set_cache_size", 00:10:23.197 "bdev_aio_delete", 00:10:23.197 "bdev_aio_rescan", 00:10:23.197 "bdev_aio_create", 00:10:23.197 "bdev_ftl_set_property", 00:10:23.197 "bdev_ftl_get_properties", 00:10:23.197 "bdev_ftl_get_stats", 00:10:23.197 "bdev_ftl_unmap", 00:10:23.197 "bdev_ftl_unload", 00:10:23.197 "bdev_ftl_delete", 00:10:23.197 "bdev_ftl_load", 00:10:23.197 "bdev_ftl_create", 00:10:23.197 "bdev_virtio_attach_controller", 00:10:23.197 "bdev_virtio_scsi_get_devices", 00:10:23.197 "bdev_virtio_detach_controller", 00:10:23.197 "bdev_virtio_blk_set_hotplug", 00:10:23.197 "bdev_iscsi_delete", 00:10:23.197 "bdev_iscsi_create", 00:10:23.197 "bdev_iscsi_set_options", 00:10:23.197 "accel_error_inject_error", 00:10:23.197 "ioat_scan_accel_module", 00:10:23.197 "dsa_scan_accel_module", 00:10:23.197 "iaa_scan_accel_module", 00:10:23.197 "keyring_file_remove_key", 00:10:23.197 "keyring_file_add_key", 00:10:23.197 
"keyring_linux_set_options", 00:10:23.197 "fsdev_aio_delete", 00:10:23.197 "fsdev_aio_create", 00:10:23.197 "iscsi_get_histogram", 00:10:23.197 "iscsi_enable_histogram", 00:10:23.197 "iscsi_set_options", 00:10:23.197 "iscsi_get_auth_groups", 00:10:23.197 "iscsi_auth_group_remove_secret", 00:10:23.197 "iscsi_auth_group_add_secret", 00:10:23.197 "iscsi_delete_auth_group", 00:10:23.197 "iscsi_create_auth_group", 00:10:23.197 "iscsi_set_discovery_auth", 00:10:23.197 "iscsi_get_options", 00:10:23.197 "iscsi_target_node_request_logout", 00:10:23.197 "iscsi_target_node_set_redirect", 00:10:23.197 "iscsi_target_node_set_auth", 00:10:23.197 "iscsi_target_node_add_lun", 00:10:23.197 "iscsi_get_stats", 00:10:23.197 "iscsi_get_connections", 00:10:23.197 "iscsi_portal_group_set_auth", 00:10:23.197 "iscsi_start_portal_group", 00:10:23.197 "iscsi_delete_portal_group", 00:10:23.197 "iscsi_create_portal_group", 00:10:23.197 "iscsi_get_portal_groups", 00:10:23.197 "iscsi_delete_target_node", 00:10:23.197 "iscsi_target_node_remove_pg_ig_maps", 00:10:23.197 "iscsi_target_node_add_pg_ig_maps", 00:10:23.197 "iscsi_create_target_node", 00:10:23.197 "iscsi_get_target_nodes", 00:10:23.197 "iscsi_delete_initiator_group", 00:10:23.197 "iscsi_initiator_group_remove_initiators", 00:10:23.197 "iscsi_initiator_group_add_initiators", 00:10:23.197 "iscsi_create_initiator_group", 00:10:23.198 "iscsi_get_initiator_groups", 00:10:23.198 "nvmf_set_crdt", 00:10:23.198 "nvmf_set_config", 00:10:23.198 "nvmf_set_max_subsystems", 00:10:23.198 "nvmf_stop_mdns_prr", 00:10:23.198 "nvmf_publish_mdns_prr", 00:10:23.198 "nvmf_subsystem_get_listeners", 00:10:23.198 "nvmf_subsystem_get_qpairs", 00:10:23.198 "nvmf_subsystem_get_controllers", 00:10:23.198 "nvmf_get_stats", 00:10:23.198 "nvmf_get_transports", 00:10:23.198 "nvmf_create_transport", 00:10:23.198 "nvmf_get_targets", 00:10:23.198 "nvmf_delete_target", 00:10:23.198 "nvmf_create_target", 00:10:23.198 "nvmf_subsystem_allow_any_host", 00:10:23.198 
"nvmf_subsystem_set_keys", 00:10:23.198 "nvmf_subsystem_remove_host", 00:10:23.198 "nvmf_subsystem_add_host", 00:10:23.198 "nvmf_ns_remove_host", 00:10:23.198 "nvmf_ns_add_host", 00:10:23.198 "nvmf_subsystem_remove_ns", 00:10:23.198 "nvmf_subsystem_set_ns_ana_group", 00:10:23.198 "nvmf_subsystem_add_ns", 00:10:23.198 "nvmf_subsystem_listener_set_ana_state", 00:10:23.198 "nvmf_discovery_get_referrals", 00:10:23.198 "nvmf_discovery_remove_referral", 00:10:23.198 "nvmf_discovery_add_referral", 00:10:23.198 "nvmf_subsystem_remove_listener", 00:10:23.198 "nvmf_subsystem_add_listener", 00:10:23.198 "nvmf_delete_subsystem", 00:10:23.198 "nvmf_create_subsystem", 00:10:23.198 "nvmf_get_subsystems", 00:10:23.198 "env_dpdk_get_mem_stats", 00:10:23.198 "nbd_get_disks", 00:10:23.198 "nbd_stop_disk", 00:10:23.198 "nbd_start_disk", 00:10:23.198 "ublk_recover_disk", 00:10:23.198 "ublk_get_disks", 00:10:23.198 "ublk_stop_disk", 00:10:23.198 "ublk_start_disk", 00:10:23.198 "ublk_destroy_target", 00:10:23.198 "ublk_create_target", 00:10:23.198 "virtio_blk_create_transport", 00:10:23.198 "virtio_blk_get_transports", 00:10:23.198 "vhost_controller_set_coalescing", 00:10:23.198 "vhost_get_controllers", 00:10:23.198 "vhost_delete_controller", 00:10:23.198 "vhost_create_blk_controller", 00:10:23.198 "vhost_scsi_controller_remove_target", 00:10:23.198 "vhost_scsi_controller_add_target", 00:10:23.198 "vhost_start_scsi_controller", 00:10:23.198 "vhost_create_scsi_controller", 00:10:23.198 "thread_set_cpumask", 00:10:23.198 "scheduler_set_options", 00:10:23.198 "framework_get_governor", 00:10:23.198 "framework_get_scheduler", 00:10:23.198 "framework_set_scheduler", 00:10:23.198 "framework_get_reactors", 00:10:23.198 "thread_get_io_channels", 00:10:23.198 "thread_get_pollers", 00:10:23.198 "thread_get_stats", 00:10:23.198 "framework_monitor_context_switch", 00:10:23.198 "spdk_kill_instance", 00:10:23.198 "log_enable_timestamps", 00:10:23.198 "log_get_flags", 00:10:23.198 "log_clear_flag", 
00:10:23.198 "log_set_flag", 00:10:23.198 "log_get_level", 00:10:23.198 "log_set_level", 00:10:23.198 "log_get_print_level", 00:10:23.198 "log_set_print_level", 00:10:23.198 "framework_enable_cpumask_locks", 00:10:23.198 "framework_disable_cpumask_locks", 00:10:23.198 "framework_wait_init", 00:10:23.198 "framework_start_init", 00:10:23.198 "scsi_get_devices", 00:10:23.198 "bdev_get_histogram", 00:10:23.198 "bdev_enable_histogram", 00:10:23.198 "bdev_set_qos_limit", 00:10:23.198 "bdev_set_qd_sampling_period", 00:10:23.198 "bdev_get_bdevs", 00:10:23.198 "bdev_reset_iostat", 00:10:23.198 "bdev_get_iostat", 00:10:23.198 "bdev_examine", 00:10:23.198 "bdev_wait_for_examine", 00:10:23.198 "bdev_set_options", 00:10:23.198 "accel_get_stats", 00:10:23.198 "accel_set_options", 00:10:23.198 "accel_set_driver", 00:10:23.198 "accel_crypto_key_destroy", 00:10:23.198 "accel_crypto_keys_get", 00:10:23.198 "accel_crypto_key_create", 00:10:23.198 "accel_assign_opc", 00:10:23.198 "accel_get_module_info", 00:10:23.198 "accel_get_opc_assignments", 00:10:23.198 "vmd_rescan", 00:10:23.198 "vmd_remove_device", 00:10:23.198 "vmd_enable", 00:10:23.198 "sock_get_default_impl", 00:10:23.198 "sock_set_default_impl", 00:10:23.198 "sock_impl_set_options", 00:10:23.198 "sock_impl_get_options", 00:10:23.198 "iobuf_get_stats", 00:10:23.198 "iobuf_set_options", 00:10:23.198 "keyring_get_keys", 00:10:23.198 "framework_get_pci_devices", 00:10:23.198 "framework_get_config", 00:10:23.198 "framework_get_subsystems", 00:10:23.198 "fsdev_set_opts", 00:10:23.198 "fsdev_get_opts", 00:10:23.198 "trace_get_info", 00:10:23.198 "trace_get_tpoint_group_mask", 00:10:23.198 "trace_disable_tpoint_group", 00:10:23.198 "trace_enable_tpoint_group", 00:10:23.198 "trace_clear_tpoint_mask", 00:10:23.198 "trace_set_tpoint_mask", 00:10:23.198 "notify_get_notifications", 00:10:23.198 "notify_get_types", 00:10:23.198 "spdk_get_version", 00:10:23.198 "rpc_get_methods" 00:10:23.198 ] 00:10:23.198 13:30:22 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:10:23.198 13:30:22 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:23.198 13:30:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:23.198 13:30:22 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:23.198 13:30:22 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57651 00:10:23.198 13:30:22 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57651 ']' 00:10:23.198 13:30:22 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57651 00:10:23.198 13:30:22 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:10:23.198 13:30:22 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:23.198 13:30:22 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57651 00:10:23.198 killing process with pid 57651 00:10:23.198 13:30:22 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:23.198 13:30:22 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:23.198 13:30:22 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57651' 00:10:23.198 13:30:22 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57651 00:10:23.198 13:30:22 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57651 00:10:25.756 ************************************ 00:10:25.756 END TEST spdkcli_tcp 00:10:25.756 ************************************ 00:10:25.756 00:10:25.756 real 0m4.340s 00:10:25.756 user 0m7.781s 00:10:25.756 sys 0m0.659s 00:10:25.756 13:30:25 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.756 13:30:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:25.756 13:30:25 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:25.756 13:30:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:25.756 13:30:25 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.756 13:30:25 -- common/autotest_common.sh@10 -- # set +x 00:10:25.756 ************************************ 00:10:25.756 START TEST dpdk_mem_utility 00:10:25.756 ************************************ 00:10:25.756 13:30:25 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:26.014 * Looking for test storage... 00:10:26.014 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:10:26.015 13:30:25 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:26.015 13:30:25 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:10:26.015 13:30:25 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:26.015 13:30:25 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:26.015 13:30:25 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:26.015 13:30:25 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:26.015 13:30:25 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:26.015 13:30:25 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:10:26.015 13:30:25 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:10:26.015 13:30:25 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:10:26.015 13:30:25 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:10:26.015 13:30:25 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:10:26.015 13:30:25 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:10:26.015 13:30:25 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:10:26.015 13:30:25 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:26.015 13:30:25 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:10:26.015 13:30:25 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:10:26.015 
13:30:25 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:26.015 13:30:25 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:26.015 13:30:25 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:10:26.015 13:30:25 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:10:26.015 13:30:25 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:26.015 13:30:25 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:10:26.015 13:30:25 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:10:26.015 13:30:25 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:10:26.015 13:30:25 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:10:26.015 13:30:25 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:26.015 13:30:25 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:10:26.015 13:30:25 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:10:26.015 13:30:25 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:26.015 13:30:25 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:26.015 13:30:25 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:10:26.015 13:30:25 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:26.015 13:30:25 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:26.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.015 --rc genhtml_branch_coverage=1 00:10:26.015 --rc genhtml_function_coverage=1 00:10:26.015 --rc genhtml_legend=1 00:10:26.015 --rc geninfo_all_blocks=1 00:10:26.015 --rc geninfo_unexecuted_blocks=1 00:10:26.015 00:10:26.015 ' 00:10:26.015 13:30:25 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:26.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.015 --rc 
genhtml_branch_coverage=1 00:10:26.015 --rc genhtml_function_coverage=1 00:10:26.015 --rc genhtml_legend=1 00:10:26.015 --rc geninfo_all_blocks=1 00:10:26.015 --rc geninfo_unexecuted_blocks=1 00:10:26.015 00:10:26.015 ' 00:10:26.015 13:30:25 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:26.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.015 --rc genhtml_branch_coverage=1 00:10:26.015 --rc genhtml_function_coverage=1 00:10:26.015 --rc genhtml_legend=1 00:10:26.015 --rc geninfo_all_blocks=1 00:10:26.015 --rc geninfo_unexecuted_blocks=1 00:10:26.015 00:10:26.015 ' 00:10:26.015 13:30:25 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:26.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.015 --rc genhtml_branch_coverage=1 00:10:26.015 --rc genhtml_function_coverage=1 00:10:26.015 --rc genhtml_legend=1 00:10:26.015 --rc geninfo_all_blocks=1 00:10:26.015 --rc geninfo_unexecuted_blocks=1 00:10:26.015 00:10:26.015 ' 00:10:26.015 13:30:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:26.015 13:30:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57778 00:10:26.015 13:30:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57778 00:10:26.015 13:30:25 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57778 ']' 00:10:26.015 13:30:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:26.015 13:30:25 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.015 13:30:25 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:26.015 13:30:25 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:10:26.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.015 13:30:25 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:26.015 13:30:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:26.273 [2024-11-20 13:30:25.513223] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:10:26.273 [2024-11-20 13:30:25.513755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57778 ] 00:10:26.273 [2024-11-20 13:30:25.695359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.532 [2024-11-20 13:30:25.805846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.495 13:30:26 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:27.495 13:30:26 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:10:27.495 13:30:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:10:27.495 13:30:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:10:27.495 13:30:26 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.495 13:30:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:27.495 { 00:10:27.496 "filename": "/tmp/spdk_mem_dump.txt" 00:10:27.496 } 00:10:27.496 13:30:26 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.496 13:30:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:27.496 DPDK memory size 824.000000 MiB in 1 heap(s) 00:10:27.496 1 heaps totaling size 824.000000 MiB 00:10:27.496 size: 
824.000000 MiB heap id: 0 00:10:27.496 end heaps---------- 00:10:27.496 9 mempools totaling size 603.782043 MiB 00:10:27.496 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:10:27.496 size: 158.602051 MiB name: PDU_data_out_Pool 00:10:27.496 size: 100.555481 MiB name: bdev_io_57778 00:10:27.496 size: 50.003479 MiB name: msgpool_57778 00:10:27.496 size: 36.509338 MiB name: fsdev_io_57778 00:10:27.496 size: 21.763794 MiB name: PDU_Pool 00:10:27.496 size: 19.513306 MiB name: SCSI_TASK_Pool 00:10:27.496 size: 4.133484 MiB name: evtpool_57778 00:10:27.496 size: 0.026123 MiB name: Session_Pool 00:10:27.496 end mempools------- 00:10:27.496 6 memzones totaling size 4.142822 MiB 00:10:27.496 size: 1.000366 MiB name: RG_ring_0_57778 00:10:27.496 size: 1.000366 MiB name: RG_ring_1_57778 00:10:27.496 size: 1.000366 MiB name: RG_ring_4_57778 00:10:27.496 size: 1.000366 MiB name: RG_ring_5_57778 00:10:27.496 size: 0.125366 MiB name: RG_ring_2_57778 00:10:27.496 size: 0.015991 MiB name: RG_ring_3_57778 00:10:27.496 end memzones------- 00:10:27.496 13:30:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:10:27.496 heap id: 0 total size: 824.000000 MiB number of busy elements: 313 number of free elements: 18 00:10:27.496 list of free elements. 
size: 16.781860 MiB 00:10:27.496 element at address: 0x200006400000 with size: 1.995972 MiB 00:10:27.496 element at address: 0x20000a600000 with size: 1.995972 MiB 00:10:27.496 element at address: 0x200003e00000 with size: 1.991028 MiB 00:10:27.496 element at address: 0x200019500040 with size: 0.999939 MiB 00:10:27.496 element at address: 0x200019900040 with size: 0.999939 MiB 00:10:27.496 element at address: 0x200019a00000 with size: 0.999084 MiB 00:10:27.496 element at address: 0x200032600000 with size: 0.994324 MiB 00:10:27.496 element at address: 0x200000400000 with size: 0.992004 MiB 00:10:27.496 element at address: 0x200019200000 with size: 0.959656 MiB 00:10:27.496 element at address: 0x200019d00040 with size: 0.936401 MiB 00:10:27.496 element at address: 0x200000200000 with size: 0.716980 MiB 00:10:27.496 element at address: 0x20001b400000 with size: 0.563416 MiB 00:10:27.496 element at address: 0x200000c00000 with size: 0.489197 MiB 00:10:27.496 element at address: 0x200019600000 with size: 0.487976 MiB 00:10:27.496 element at address: 0x200019e00000 with size: 0.485413 MiB 00:10:27.496 element at address: 0x200012c00000 with size: 0.433228 MiB 00:10:27.496 element at address: 0x200028800000 with size: 0.390442 MiB 00:10:27.496 element at address: 0x200000800000 with size: 0.350891 MiB 00:10:27.496 list of standard malloc elements. 
size: 199.287231 MiB 00:10:27.496 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:10:27.496 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:10:27.496 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:10:27.496 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:10:27.496 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:10:27.496 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:10:27.496 element at address: 0x200019deff40 with size: 0.062683 MiB 00:10:27.496 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:10:27.496 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:10:27.496 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:10:27.496 element at address: 0x200012bff040 with size: 0.000305 MiB 00:10:27.496 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:10:27.496 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:10:27.496 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:10:27.496 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:10:27.496 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:10:27.496 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:10:27.496 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:10:27.496 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:10:27.496 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:10:27.496 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:10:27.496 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:10:27.496 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:10:27.496 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:10:27.496 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:10:27.496 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:10:27.496 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:10:27.496 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:10:27.496 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:10:27.496 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:10:27.496 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:10:27.496 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:10:27.496 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:10:27.496 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:10:27.496 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:10:27.496 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:10:27.496 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:10:27.496 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:10:27.496 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:10:27.496 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:10:27.496 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:10:27.496 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:10:27.496 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:10:27.496 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x200000c7e4c0 with 
size: 0.000244 MiB 00:10:27.496 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:10:27.496 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:10:27.496 element at address: 0x200000cff000 with size: 0.000244 MiB 00:10:27.496 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:10:27.496 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:10:27.496 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:10:27.496 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:10:27.496 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:10:27.496 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:10:27.496 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:10:27.496 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:10:27.496 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:10:27.496 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:10:27.496 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:10:27.496 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:10:27.496 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:10:27.497 element at address: 0x200012bff180 with size: 0.000244 MiB 00:10:27.497 element at address: 0x200012bff280 with size: 0.000244 MiB 00:10:27.497 element at address: 0x200012bff380 with size: 0.000244 MiB 00:10:27.497 element at address: 0x200012bff480 with size: 0.000244 MiB 00:10:27.497 element at address: 
0x200012bff580 with size: 0.000244 MiB 00:10:27.497 element at address: 0x200012bff680 with size: 0.000244 MiB 00:10:27.497 element at address: 0x200012bff780 with size: 0.000244 MiB 00:10:27.497 element at address: 0x200012bff880 with size: 0.000244 MiB 00:10:27.497 element at address: 0x200012bff980 with size: 0.000244 MiB 00:10:27.497 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:10:27.497 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:10:27.497 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:10:27.497 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:10:27.497 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:10:27.497 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:10:27.497 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:10:27.497 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:10:27.497 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:10:27.497 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:10:27.497 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:10:27.497 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:10:27.497 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:10:27.497 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:10:27.497 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:10:27.497 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:10:27.497 
element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:10:27.497 element at address: 0x200019affc40 with size: 0.000244 MiB 00:10:27.497 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4913c0 with size: 0.000244 
MiB 00:10:27.497 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b492fc0 
with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:10:27.497 element at 
address: 0x20001b494bc0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:10:27.497 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:10:27.498 element at address: 0x200028863f40 with size: 0.000244 MiB 00:10:27.498 element at address: 0x200028864040 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886af80 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886b080 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886b180 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886b280 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886b380 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886b480 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886b580 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886b680 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886b780 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886b880 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886b980 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886be80 with size: 0.000244 MiB 
00:10:27.498 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886c080 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886c180 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886c280 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886c380 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886c480 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886c580 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886c680 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886c780 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886c880 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886c980 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886d080 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886d180 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886d280 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886d380 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886d480 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886d580 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886d680 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886d780 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886d880 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886d980 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886da80 with 
size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886db80 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886de80 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886df80 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886e080 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886e180 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886e280 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886e380 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886e480 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886e580 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886e680 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886e780 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886e880 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886e980 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886f080 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886f180 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886f280 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886f380 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886f480 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886f580 with size: 0.000244 MiB 00:10:27.498 element at address: 
0x20002886f680 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886f780 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886f880 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886f980 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:10:27.498 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:10:27.498 list of memzone associated elements. size: 607.930908 MiB 00:10:27.498 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:10:27.498 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:10:27.498 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:10:27.498 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:10:27.498 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:10:27.498 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_57778_0 00:10:27.498 element at address: 0x200000dff340 with size: 48.003113 MiB 00:10:27.498 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57778_0 00:10:27.498 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:10:27.498 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57778_0 00:10:27.498 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:10:27.498 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:10:27.498 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:10:27.498 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:10:27.498 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:10:27.498 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57778_0 00:10:27.498 element at address: 0x2000009ffdc0 
with size: 2.000549 MiB 00:10:27.498 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57778 00:10:27.499 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:10:27.499 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57778 00:10:27.499 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:10:27.499 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:10:27.499 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:10:27.499 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:10:27.499 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:10:27.499 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:10:27.499 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:10:27.499 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:10:27.499 element at address: 0x200000cff100 with size: 1.000549 MiB 00:10:27.499 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57778 00:10:27.499 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:10:27.499 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57778 00:10:27.499 element at address: 0x200019affd40 with size: 1.000549 MiB 00:10:27.499 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57778 00:10:27.499 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:10:27.499 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57778 00:10:27.499 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:10:27.499 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57778 00:10:27.499 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:10:27.499 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57778 00:10:27.499 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:10:27.499 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:10:27.499 element at address: 0x200012c6f980 with 
size: 0.500549 MiB 00:10:27.499 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:10:27.499 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:10:27.499 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:10:27.499 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:10:27.499 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57778 00:10:27.499 element at address: 0x20000085df80 with size: 0.125549 MiB 00:10:27.499 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57778 00:10:27.499 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:10:27.499 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:10:27.499 element at address: 0x200028864140 with size: 0.023804 MiB 00:10:27.499 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:10:27.499 element at address: 0x200000859d40 with size: 0.016174 MiB 00:10:27.499 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57778 00:10:27.499 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:10:27.499 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:10:27.499 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:10:27.499 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57778 00:10:27.499 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:10:27.499 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57778 00:10:27.499 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:10:27.499 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57778 00:10:27.499 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:10:27.499 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:10:27.499 13:30:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:10:27.499 13:30:26 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57778 00:10:27.499 13:30:26 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57778 ']' 00:10:27.499 13:30:26 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57778 00:10:27.499 13:30:26 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:10:27.499 13:30:26 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:27.499 13:30:26 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57778 00:10:27.499 13:30:26 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:27.499 13:30:26 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:27.499 13:30:26 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57778' 00:10:27.499 killing process with pid 57778 00:10:27.499 13:30:26 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57778 00:10:27.499 13:30:26 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57778 00:10:30.053 00:10:30.053 real 0m4.186s 00:10:30.053 user 0m4.085s 00:10:30.053 sys 0m0.594s 00:10:30.053 13:30:29 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.053 13:30:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:30.053 ************************************ 00:10:30.053 END TEST dpdk_mem_utility 00:10:30.053 ************************************ 00:10:30.053 13:30:29 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:10:30.053 13:30:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:30.053 13:30:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.053 13:30:29 -- common/autotest_common.sh@10 -- # set +x 00:10:30.053 ************************************ 00:10:30.053 START TEST event 00:10:30.053 ************************************ 00:10:30.053 13:30:29 event -- 
common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:10:30.312 * Looking for test storage... 00:10:30.312 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:30.312 13:30:29 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:30.312 13:30:29 event -- common/autotest_common.sh@1693 -- # lcov --version 00:10:30.312 13:30:29 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:30.312 13:30:29 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:30.312 13:30:29 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:30.312 13:30:29 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:30.312 13:30:29 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:30.312 13:30:29 event -- scripts/common.sh@336 -- # IFS=.-: 00:10:30.312 13:30:29 event -- scripts/common.sh@336 -- # read -ra ver1 00:10:30.312 13:30:29 event -- scripts/common.sh@337 -- # IFS=.-: 00:10:30.312 13:30:29 event -- scripts/common.sh@337 -- # read -ra ver2 00:10:30.312 13:30:29 event -- scripts/common.sh@338 -- # local 'op=<' 00:10:30.312 13:30:29 event -- scripts/common.sh@340 -- # ver1_l=2 00:10:30.312 13:30:29 event -- scripts/common.sh@341 -- # ver2_l=1 00:10:30.312 13:30:29 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:30.312 13:30:29 event -- scripts/common.sh@344 -- # case "$op" in 00:10:30.312 13:30:29 event -- scripts/common.sh@345 -- # : 1 00:10:30.312 13:30:29 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:30.312 13:30:29 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:30.312 13:30:29 event -- scripts/common.sh@365 -- # decimal 1 00:10:30.312 13:30:29 event -- scripts/common.sh@353 -- # local d=1 00:10:30.312 13:30:29 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:30.312 13:30:29 event -- scripts/common.sh@355 -- # echo 1 00:10:30.312 13:30:29 event -- scripts/common.sh@365 -- # ver1[v]=1 00:10:30.312 13:30:29 event -- scripts/common.sh@366 -- # decimal 2 00:10:30.312 13:30:29 event -- scripts/common.sh@353 -- # local d=2 00:10:30.312 13:30:29 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:30.312 13:30:29 event -- scripts/common.sh@355 -- # echo 2 00:10:30.312 13:30:29 event -- scripts/common.sh@366 -- # ver2[v]=2 00:10:30.312 13:30:29 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:30.312 13:30:29 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:30.312 13:30:29 event -- scripts/common.sh@368 -- # return 0 00:10:30.312 13:30:29 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:30.312 13:30:29 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:30.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.312 --rc genhtml_branch_coverage=1 00:10:30.312 --rc genhtml_function_coverage=1 00:10:30.312 --rc genhtml_legend=1 00:10:30.312 --rc geninfo_all_blocks=1 00:10:30.312 --rc geninfo_unexecuted_blocks=1 00:10:30.312 00:10:30.312 ' 00:10:30.312 13:30:29 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:30.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.312 --rc genhtml_branch_coverage=1 00:10:30.312 --rc genhtml_function_coverage=1 00:10:30.312 --rc genhtml_legend=1 00:10:30.312 --rc geninfo_all_blocks=1 00:10:30.312 --rc geninfo_unexecuted_blocks=1 00:10:30.312 00:10:30.312 ' 00:10:30.312 13:30:29 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:30.312 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:30.312 --rc genhtml_branch_coverage=1 00:10:30.312 --rc genhtml_function_coverage=1 00:10:30.312 --rc genhtml_legend=1 00:10:30.312 --rc geninfo_all_blocks=1 00:10:30.312 --rc geninfo_unexecuted_blocks=1 00:10:30.312 00:10:30.312 ' 00:10:30.312 13:30:29 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:30.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.312 --rc genhtml_branch_coverage=1 00:10:30.312 --rc genhtml_function_coverage=1 00:10:30.312 --rc genhtml_legend=1 00:10:30.313 --rc geninfo_all_blocks=1 00:10:30.313 --rc geninfo_unexecuted_blocks=1 00:10:30.313 00:10:30.313 ' 00:10:30.313 13:30:29 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:30.313 13:30:29 event -- bdev/nbd_common.sh@6 -- # set -e 00:10:30.313 13:30:29 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:30.313 13:30:29 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:10:30.313 13:30:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.313 13:30:29 event -- common/autotest_common.sh@10 -- # set +x 00:10:30.313 ************************************ 00:10:30.313 START TEST event_perf 00:10:30.313 ************************************ 00:10:30.313 13:30:29 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:30.313 Running I/O for 1 seconds...[2024-11-20 13:30:29.728892] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:10:30.313 [2024-11-20 13:30:29.729136] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57886 ] 00:10:30.572 [2024-11-20 13:30:29.912763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:30.572 [2024-11-20 13:30:30.050424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:30.572 [2024-11-20 13:30:30.050467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:30.572 Running I/O for 1 seconds...[2024-11-20 13:30:30.050651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.572 [2024-11-20 13:30:30.050681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:31.949 00:10:31.949 lcore 0: 185881 00:10:31.949 lcore 1: 185881 00:10:31.949 lcore 2: 185882 00:10:31.949 lcore 3: 185882 00:10:31.949 done. 
00:10:31.949 00:10:31.949 real 0m1.634s 00:10:31.949 user 0m4.349s 00:10:31.949 ************************************ 00:10:31.949 END TEST event_perf 00:10:31.949 ************************************ 00:10:31.949 sys 0m0.154s 00:10:31.949 13:30:31 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:31.949 13:30:31 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:10:31.949 13:30:31 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:31.949 13:30:31 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:31.949 13:30:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:31.949 13:30:31 event -- common/autotest_common.sh@10 -- # set +x 00:10:31.949 ************************************ 00:10:31.949 START TEST event_reactor 00:10:31.949 ************************************ 00:10:31.949 13:30:31 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:31.949 [2024-11-20 13:30:31.428804] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:10:31.949 [2024-11-20 13:30:31.428927] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57926 ] 00:10:32.207 [2024-11-20 13:30:31.613472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.465 [2024-11-20 13:30:31.741008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.843 test_start 00:10:33.843 oneshot 00:10:33.843 tick 100 00:10:33.843 tick 100 00:10:33.843 tick 250 00:10:33.843 tick 100 00:10:33.843 tick 100 00:10:33.843 tick 100 00:10:33.843 tick 250 00:10:33.843 tick 500 00:10:33.843 tick 100 00:10:33.843 tick 100 00:10:33.843 tick 250 00:10:33.843 tick 100 00:10:33.843 tick 100 00:10:33.843 test_end 00:10:33.843 00:10:33.843 real 0m1.592s 00:10:33.843 user 0m1.378s 00:10:33.843 sys 0m0.106s 00:10:33.843 ************************************ 00:10:33.843 END TEST event_reactor 00:10:33.843 ************************************ 00:10:33.843 13:30:32 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:33.843 13:30:32 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:10:33.843 13:30:33 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:33.843 13:30:33 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:33.843 13:30:33 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:33.843 13:30:33 event -- common/autotest_common.sh@10 -- # set +x 00:10:33.843 ************************************ 00:10:33.843 START TEST event_reactor_perf 00:10:33.843 ************************************ 00:10:33.843 13:30:33 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:33.843 [2024-11-20 
13:30:33.095489] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:10:33.843 [2024-11-20 13:30:33.095618] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57962 ] 00:10:33.843 [2024-11-20 13:30:33.277992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.102 [2024-11-20 13:30:33.397371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.480 test_start 00:10:35.480 test_end 00:10:35.480 Performance: 360071 events per second 00:10:35.480 ************************************ 00:10:35.480 END TEST event_reactor_perf 00:10:35.480 ************************************ 00:10:35.480 00:10:35.480 real 0m1.588s 00:10:35.480 user 0m1.357s 00:10:35.480 sys 0m0.121s 00:10:35.480 13:30:34 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:35.480 13:30:34 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:10:35.480 13:30:34 event -- event/event.sh@49 -- # uname -s 00:10:35.480 13:30:34 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:10:35.480 13:30:34 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:35.480 13:30:34 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:35.480 13:30:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:35.480 13:30:34 event -- common/autotest_common.sh@10 -- # set +x 00:10:35.480 ************************************ 00:10:35.480 START TEST event_scheduler 00:10:35.480 ************************************ 00:10:35.480 13:30:34 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:35.480 * Looking for test storage... 
00:10:35.480 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:10:35.480 13:30:34 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:35.480 13:30:34 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:10:35.480 13:30:34 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:35.480 13:30:34 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:35.480 13:30:34 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:35.480 13:30:34 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:35.480 13:30:34 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:35.480 13:30:34 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:10:35.480 13:30:34 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:10:35.480 13:30:34 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:10:35.480 13:30:34 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:10:35.480 13:30:34 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:10:35.480 13:30:34 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:10:35.480 13:30:34 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:10:35.480 13:30:34 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:35.480 13:30:34 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:10:35.480 13:30:34 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:10:35.480 13:30:34 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:35.480 13:30:34 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:35.480 13:30:34 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:10:35.480 13:30:34 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:10:35.480 13:30:34 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:35.480 13:30:34 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:10:35.480 13:30:34 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:10:35.480 13:30:34 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:10:35.480 13:30:34 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:10:35.480 13:30:34 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:35.480 13:30:34 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:10:35.480 13:30:34 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:10:35.480 13:30:34 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:35.480 13:30:34 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:35.480 13:30:34 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:10:35.480 13:30:34 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:35.480 13:30:34 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:35.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.480 --rc genhtml_branch_coverage=1 00:10:35.480 --rc genhtml_function_coverage=1 00:10:35.480 --rc genhtml_legend=1 00:10:35.480 --rc geninfo_all_blocks=1 00:10:35.480 --rc geninfo_unexecuted_blocks=1 00:10:35.480 00:10:35.480 ' 00:10:35.480 13:30:34 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:35.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.480 --rc genhtml_branch_coverage=1 00:10:35.480 --rc genhtml_function_coverage=1 00:10:35.480 --rc 
genhtml_legend=1 00:10:35.480 --rc geninfo_all_blocks=1 00:10:35.480 --rc geninfo_unexecuted_blocks=1 00:10:35.480 00:10:35.480 ' 00:10:35.480 13:30:34 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:35.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.480 --rc genhtml_branch_coverage=1 00:10:35.480 --rc genhtml_function_coverage=1 00:10:35.480 --rc genhtml_legend=1 00:10:35.480 --rc geninfo_all_blocks=1 00:10:35.480 --rc geninfo_unexecuted_blocks=1 00:10:35.480 00:10:35.480 ' 00:10:35.480 13:30:34 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:35.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.480 --rc genhtml_branch_coverage=1 00:10:35.480 --rc genhtml_function_coverage=1 00:10:35.480 --rc genhtml_legend=1 00:10:35.480 --rc geninfo_all_blocks=1 00:10:35.480 --rc geninfo_unexecuted_blocks=1 00:10:35.480 00:10:35.480 ' 00:10:35.480 13:30:34 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:10:35.480 13:30:34 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:10:35.481 13:30:34 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58038 00:10:35.481 13:30:34 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:10:35.481 13:30:34 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58038 00:10:35.481 13:30:34 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58038 ']' 00:10:35.481 13:30:34 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.481 13:30:34 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:35.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:35.481 13:30:34 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.481 13:30:34 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:35.481 13:30:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:35.741 [2024-11-20 13:30:35.008211] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:10:35.741 [2024-11-20 13:30:35.008349] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58038 ] 00:10:35.741 [2024-11-20 13:30:35.192554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:36.000 [2024-11-20 13:30:35.322254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.000 [2024-11-20 13:30:35.322445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:36.000 [2024-11-20 13:30:35.324187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:36.000 [2024-11-20 13:30:35.324241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:36.570 13:30:35 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:36.570 13:30:35 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:10:36.570 13:30:35 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:10:36.570 13:30:35 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.570 13:30:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:36.570 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:36.570 POWER: Cannot set governor of lcore 0 to userspace 00:10:36.570 POWER: 
failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:36.570 POWER: Cannot set governor of lcore 0 to performance 00:10:36.570 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:36.570 POWER: Cannot set governor of lcore 0 to userspace 00:10:36.570 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:36.570 POWER: Cannot set governor of lcore 0 to userspace 00:10:36.570 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:10:36.570 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:10:36.570 POWER: Unable to set Power Management Environment for lcore 0 00:10:36.570 [2024-11-20 13:30:35.893003] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:10:36.570 [2024-11-20 13:30:35.893031] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:10:36.570 [2024-11-20 13:30:35.893045] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:10:36.570 [2024-11-20 13:30:35.893085] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:10:36.570 [2024-11-20 13:30:35.893097] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:10:36.570 [2024-11-20 13:30:35.893110] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:10:36.570 13:30:35 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.570 13:30:35 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:10:36.570 13:30:35 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.570 13:30:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:36.830 [2024-11-20 13:30:36.242207] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:10:36.830 13:30:36 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.830 13:30:36 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:10:36.830 13:30:36 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:36.830 13:30:36 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:36.830 13:30:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:36.830 ************************************ 00:10:36.830 START TEST scheduler_create_thread 00:10:36.830 ************************************ 00:10:36.830 13:30:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:10:36.830 13:30:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:10:36.830 13:30:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.830 13:30:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:36.830 2 00:10:36.830 13:30:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.830 13:30:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:10:36.830 13:30:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.830 13:30:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:36.830 3 00:10:36.830 13:30:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.830 13:30:36 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:10:36.830 13:30:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.830 13:30:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:36.830 4 00:10:36.830 13:30:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.830 13:30:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:10:36.830 13:30:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.830 13:30:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:36.830 5 00:10:36.830 13:30:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.830 13:30:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:10:36.830 13:30:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.830 13:30:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:36.830 6 00:10:36.830 13:30:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.830 13:30:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:10:36.830 13:30:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.830 13:30:36 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:10:37.090 7 00:10:37.090 13:30:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.090 13:30:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:10:37.090 13:30:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.090 13:30:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:37.090 8 00:10:37.090 13:30:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.090 13:30:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:10:37.090 13:30:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.090 13:30:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:37.090 9 00:10:37.090 13:30:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.090 13:30:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:10:37.090 13:30:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.090 13:30:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:37.090 10 00:10:37.090 13:30:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.090 13:30:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:10:37.090 13:30:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.090 13:30:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:37.090 13:30:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.090 13:30:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:10:37.090 13:30:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:10:37.090 13:30:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.090 13:30:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:37.090 13:30:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.090 13:30:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:10:37.090 13:30:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.090 13:30:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:38.467 13:30:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.467 13:30:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:10:38.467 13:30:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:10:38.467 13:30:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.467 13:30:37 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:39.404 ************************************ 00:10:39.404 END TEST scheduler_create_thread 00:10:39.404 ************************************ 00:10:39.404 13:30:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.404 00:10:39.404 real 0m2.625s 00:10:39.404 user 0m0.024s 00:10:39.404 sys 0m0.009s 00:10:39.404 13:30:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:39.404 13:30:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:39.663 13:30:38 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:10:39.663 13:30:38 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58038 00:10:39.663 13:30:38 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58038 ']' 00:10:39.663 13:30:38 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58038 00:10:39.663 13:30:38 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:10:39.663 13:30:38 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:39.663 13:30:38 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58038 00:10:39.663 killing process with pid 58038 00:10:39.663 13:30:38 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:10:39.663 13:30:38 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:10:39.663 13:30:38 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58038' 00:10:39.663 13:30:38 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58038 00:10:39.663 13:30:38 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58038 00:10:39.923 [2024-11-20 13:30:39.359449] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:10:41.300 00:10:41.300 real 0m5.853s 00:10:41.300 user 0m10.009s 00:10:41.300 sys 0m0.529s 00:10:41.300 13:30:40 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:41.300 13:30:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:41.300 ************************************ 00:10:41.300 END TEST event_scheduler 00:10:41.300 ************************************ 00:10:41.300 13:30:40 event -- event/event.sh@51 -- # modprobe -n nbd 00:10:41.300 13:30:40 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:10:41.300 13:30:40 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:41.300 13:30:40 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.300 13:30:40 event -- common/autotest_common.sh@10 -- # set +x 00:10:41.300 ************************************ 00:10:41.300 START TEST app_repeat 00:10:41.301 ************************************ 00:10:41.301 13:30:40 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:10:41.301 13:30:40 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:41.301 13:30:40 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:41.301 13:30:40 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:10:41.301 13:30:40 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:41.301 13:30:40 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:10:41.301 13:30:40 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:10:41.301 13:30:40 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:10:41.301 13:30:40 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58150 00:10:41.301 13:30:40 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:10:41.301 
13:30:40 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:10:41.301 Process app_repeat pid: 58150 00:10:41.301 13:30:40 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58150' 00:10:41.301 13:30:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:41.301 spdk_app_start Round 0 00:10:41.301 13:30:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:10:41.301 13:30:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58150 /var/tmp/spdk-nbd.sock 00:10:41.301 13:30:40 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58150 ']' 00:10:41.301 13:30:40 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:41.301 13:30:40 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:41.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:41.301 13:30:40 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:41.301 13:30:40 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:41.301 13:30:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:41.301 [2024-11-20 13:30:40.666030] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:10:41.301 [2024-11-20 13:30:40.666170] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58150 ] 00:10:41.560 [2024-11-20 13:30:40.842256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:41.560 [2024-11-20 13:30:40.969985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.560 [2024-11-20 13:30:40.970014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.541 13:30:41 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:42.541 13:30:41 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:42.541 13:30:41 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:42.541 Malloc0 00:10:42.541 13:30:41 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:42.801 Malloc1 00:10:42.801 13:30:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:42.801 13:30:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:42.801 13:30:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:42.801 13:30:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:42.801 13:30:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:42.801 13:30:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:42.801 13:30:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:42.801 13:30:42 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:42.801 13:30:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:10:42.801 13:30:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:10:42.801 13:30:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:42.801 13:30:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:10:42.801 13:30:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:10:42.801 13:30:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:10:42.801 13:30:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:10:42.801 13:30:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:10:43.062 /dev/nbd0
00:10:43.062 13:30:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:10:43.062 13:30:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:10:43.062 13:30:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:10:43.062 13:30:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:10:43.062 13:30:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:43.062 13:30:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:43.062 13:30:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:10:43.062 13:30:42 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:10:43.062 13:30:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:43.062 13:30:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:43.062 13:30:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:10:43.062 1+0 records in
00:10:43.062 1+0 records out
00:10:43.062 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466764 s, 8.8 MB/s
00:10:43.062 13:30:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:10:43.062 13:30:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:10:43.062 13:30:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:10:43.062 13:30:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:43.062 13:30:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:10:43.062 13:30:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:43.062 13:30:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:10:43.062 13:30:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:10:43.321 /dev/nbd1
00:10:43.321 13:30:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:10:43.321 13:30:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:10:43.321 13:30:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:10:43.321 13:30:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:10:43.321 13:30:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:43.321 13:30:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:43.321 13:30:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:10:43.321 13:30:42 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:10:43.321 13:30:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:43.321 13:30:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:43.321 13:30:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:10:43.321 1+0 records in
00:10:43.321 1+0 records out
00:10:43.321 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000441452 s, 9.3 MB/s
00:10:43.321 13:30:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:10:43.321 13:30:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:10:43.321 13:30:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:10:43.321 13:30:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:43.321 13:30:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:10:43.321 13:30:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:43.321 13:30:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:10:43.321 13:30:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:10:43.321 13:30:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:43.322 13:30:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:10:43.581 13:30:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:10:43.581 {
00:10:43.581 "nbd_device": "/dev/nbd0",
00:10:43.581 "bdev_name": "Malloc0"
00:10:43.581 },
00:10:43.581 {
00:10:43.581 "nbd_device": "/dev/nbd1",
00:10:43.581 "bdev_name": "Malloc1"
00:10:43.581 }
00:10:43.581 ]'
00:10:43.581 13:30:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:10:43.581 13:30:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:10:43.581 {
00:10:43.581 "nbd_device": "/dev/nbd0",
00:10:43.581 "bdev_name": "Malloc0"
00:10:43.581 },
00:10:43.581 {
00:10:43.581 "nbd_device": "/dev/nbd1",
00:10:43.581 "bdev_name": "Malloc1"
00:10:43.581 }
00:10:43.581 ]'
00:10:43.581 13:30:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:10:43.581 /dev/nbd1'
00:10:43.581 13:30:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:10:43.581 /dev/nbd1'
00:10:43.581 13:30:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:10:43.581 13:30:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:10:43.581 13:30:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:10:43.581 13:30:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:10:43.581 13:30:42 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:10:43.581 13:30:42 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:10:43.581 13:30:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:43.581 13:30:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:10:43.581 13:30:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:10:43.581 13:30:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:10:43.581 13:30:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:10:43.581 13:30:42 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:10:43.581 256+0 records in
00:10:43.581 256+0 records out
00:10:43.581 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129568 s, 80.9 MB/s
00:10:43.581 13:30:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:43.581 13:30:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:10:43.581 256+0 records in
00:10:43.581 256+0 records out
00:10:43.581 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0292166 s, 35.9 MB/s
00:10:43.581 13:30:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:43.581 13:30:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:10:43.841 256+0 records in
00:10:43.841 256+0 records out
00:10:43.841 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0318868 s, 32.9 MB/s
00:10:43.841 13:30:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:10:43.841 13:30:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:43.841 13:30:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:10:43.841 13:30:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:10:43.841 13:30:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:10:43.841 13:30:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:10:43.841 13:30:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:10:43.841 13:30:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:43.841 13:30:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:10:43.841 13:30:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:43.841 13:30:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:10:43.841 13:30:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:10:43.841 13:30:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:10:43.841 13:30:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:43.841 13:30:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:43.841 13:30:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:10:43.841 13:30:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:10:43.841 13:30:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:43.841 13:30:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:10:43.841 13:30:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:10:43.841 13:30:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:10:43.841 13:30:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:10:43.841 13:30:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:43.841 13:30:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:43.841 13:30:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:10:43.841 13:30:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:10:43.841 13:30:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:10:43.841 13:30:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:43.841 13:30:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:10:44.100 13:30:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:10:44.101 13:30:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:10:44.101 13:30:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:10:44.101 13:30:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:44.101 13:30:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:44.101 13:30:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:10:44.101 13:30:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:10:44.101 13:30:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:10:44.101 13:30:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:10:44.101 13:30:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:44.101 13:30:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:10:44.360 13:30:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:10:44.360 13:30:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:10:44.360 13:30:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:10:44.360 13:30:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:10:44.360 13:30:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:10:44.360 13:30:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:10:44.360 13:30:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:10:44.360 13:30:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:10:44.360 13:30:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:10:44.360 13:30:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:10:44.360 13:30:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:10:44.360 13:30:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:10:44.360 13:30:43 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:10:44.927 13:30:44 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:10:46.358 [2024-11-20 13:30:45.393447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:10:46.358 [2024-11-20 13:30:45.508278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:46.358 [2024-11-20 13:30:45.508280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:46.358 [2024-11-20 13:30:45.696910] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:10:46.358 [2024-11-20 13:30:45.697020] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:10:47.755 13:30:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:10:47.755 spdk_app_start Round 1
00:10:47.755 13:30:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:10:47.755 13:30:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58150 /var/tmp/spdk-nbd.sock
00:10:47.755 13:30:47 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58150 ']'
00:10:47.755 13:30:47 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:10:47.755 13:30:47 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:47.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:10:47.755 13:30:47 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:10:47.755 13:30:47 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:47.755 13:30:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:10:48.015 13:30:47 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:48.015 13:30:47 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:10:48.015 13:30:47 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:10:48.584 Malloc0
00:10:48.584 13:30:47 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:10:48.584 Malloc1
00:10:48.584 13:30:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:10:48.584 13:30:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:48.584 13:30:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:10:48.584 13:30:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:10:48.584 13:30:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:48.584 13:30:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:10:48.584 13:30:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:10:48.584 13:30:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:48.584 13:30:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:10:48.584 13:30:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:10:48.584 13:30:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:48.584 13:30:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:10:48.584 13:30:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:10:48.584 13:30:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:10:48.584 13:30:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:10:48.584 13:30:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:10:48.845 /dev/nbd0
00:10:48.845 13:30:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:10:48.845 13:30:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:10:48.845 13:30:48 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:10:48.845 13:30:48 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:10:48.845 13:30:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:48.845 13:30:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:48.845 13:30:48 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:10:48.845 13:30:48 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:10:48.845 13:30:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:48.845 13:30:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:48.845 13:30:48 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:10:48.845 1+0 records in
00:10:48.845 1+0 records out
00:10:48.845 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000380329 s, 10.8 MB/s
00:10:48.845 13:30:48 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:10:48.845 13:30:48 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:10:48.845 13:30:48 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:10:48.845 13:30:48 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:48.845 13:30:48 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:10:48.845 13:30:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:48.845 13:30:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:10:48.845 13:30:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:10:49.104 /dev/nbd1
00:10:49.104 13:30:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:10:49.104 13:30:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:10:49.104 13:30:48 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:10:49.104 13:30:48 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:10:49.104 13:30:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:49.104 13:30:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:49.104 13:30:48 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:10:49.363 13:30:48 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:10:49.363 13:30:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:49.363 13:30:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:49.363 13:30:48 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:10:49.363 1+0 records in
00:10:49.363 1+0 records out
00:10:49.363 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369236 s, 11.1 MB/s
00:10:49.363 13:30:48 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:10:49.363 13:30:48 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:10:49.363 13:30:48 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:10:49.363 13:30:48 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:49.363 13:30:48 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:10:49.363 13:30:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:49.363 13:30:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:10:49.363 13:30:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:10:49.363 13:30:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:49.363 13:30:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:10:49.363 13:30:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:10:49.363 {
00:10:49.363 "nbd_device": "/dev/nbd0",
00:10:49.363 "bdev_name": "Malloc0"
00:10:49.363 },
00:10:49.363 {
00:10:49.363 "nbd_device": "/dev/nbd1",
00:10:49.363 "bdev_name": "Malloc1"
00:10:49.363 }
00:10:49.363 ]'
00:10:49.363 13:30:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:10:49.363 {
00:10:49.363 "nbd_device": "/dev/nbd0",
00:10:49.363 "bdev_name": "Malloc0"
00:10:49.363 },
00:10:49.363 {
00:10:49.363 "nbd_device": "/dev/nbd1",
00:10:49.363 "bdev_name": "Malloc1"
00:10:49.363 }
00:10:49.363 ]'
00:10:49.363 13:30:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:10:49.622 13:30:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:10:49.622 /dev/nbd1'
00:10:49.622 13:30:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:10:49.622 13:30:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:10:49.622 /dev/nbd1'
00:10:49.622 13:30:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:10:49.622 13:30:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:10:49.622 13:30:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:10:49.622 13:30:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:10:49.622 13:30:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:10:49.622 13:30:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:49.622 13:30:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:10:49.622 13:30:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:10:49.622 13:30:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:10:49.622 13:30:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:10:49.622 13:30:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:10:49.622 256+0 records in
00:10:49.622 256+0 records out
00:10:49.622 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134575 s, 77.9 MB/s
00:10:49.622 13:30:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:49.622 13:30:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:10:49.622 256+0 records in
00:10:49.622 256+0 records out
00:10:49.622 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260108 s, 40.3 MB/s
00:10:49.622 13:30:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:49.622 13:30:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:10:49.622 256+0 records in
00:10:49.622 256+0 records out
00:10:49.622 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0326795 s, 32.1 MB/s
00:10:49.622 13:30:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:10:49.622 13:30:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:49.622 13:30:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:10:49.622 13:30:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:10:49.622 13:30:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:10:49.622 13:30:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:10:49.622 13:30:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:10:49.622 13:30:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:49.622 13:30:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:10:49.622 13:30:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:49.622 13:30:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:10:49.622 13:30:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:10:49.622 13:30:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:10:49.622 13:30:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:49.622 13:30:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:49.622 13:30:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:10:49.622 13:30:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:10:49.622 13:30:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:49.622 13:30:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:10:49.881 13:30:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:10:49.881 13:30:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:10:49.881 13:30:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:10:49.881 13:30:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:49.881 13:30:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:49.881 13:30:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:10:49.881 13:30:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:10:49.881 13:30:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:10:49.881 13:30:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:49.881 13:30:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:10:50.139 13:30:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:10:50.139 13:30:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:10:50.139 13:30:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:10:50.139 13:30:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:50.139 13:30:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:50.139 13:30:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:10:50.139 13:30:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:10:50.139 13:30:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:10:50.139 13:30:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:10:50.139 13:30:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:50.139 13:30:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:10:50.398 13:30:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:10:50.398 13:30:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:10:50.398 13:30:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:10:50.398 13:30:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:10:50.398 13:30:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:10:50.398 13:30:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:10:50.398 13:30:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:10:50.398 13:30:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:10:50.398 13:30:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:10:50.398 13:30:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:10:50.398 13:30:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:10:50.398 13:30:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:10:50.398 13:30:49 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:10:50.657 13:30:50 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:10:52.039 [2024-11-20 13:30:51.295113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:10:52.039 [2024-11-20 13:30:51.408521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:52.039 [2024-11-20 13:30:51.408539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:52.299 [2024-11-20 13:30:51.607388] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:10:52.299 [2024-11-20 13:30:51.607476] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:10:53.676 13:30:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:10:53.676 spdk_app_start Round 2
00:10:53.676 13:30:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:10:53.676 13:30:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58150 /var/tmp/spdk-nbd.sock
00:10:53.676 13:30:53 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58150 ']'
00:10:53.676 13:30:53 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:10:53.676 13:30:53 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:53.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:10:53.676 13:30:53 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:10:53.676 13:30:53 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:53.676 13:30:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:10:53.935 13:30:53 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:53.935 13:30:53 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:10:53.935 13:30:53 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:10:54.195 Malloc0
00:10:54.454 13:30:53 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:10:54.454 Malloc1
00:10:54.713 13:30:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:10:54.713 13:30:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:54.713 13:30:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:10:54.714 13:30:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:10:54.714 13:30:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:54.714 13:30:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:10:54.714 13:30:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:10:54.714 13:30:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:54.714 13:30:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:10:54.714 13:30:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:10:54.714 13:30:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:54.714 13:30:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:10:54.714 13:30:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:10:54.714 13:30:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:10:54.714 13:30:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:10:54.714 13:30:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:10:54.714 /dev/nbd0
00:10:54.714 13:30:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:10:54.714 13:30:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:10:54.714 13:30:54 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:10:54.714 13:30:54 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:10:54.714 13:30:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:54.714 13:30:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:54.714 13:30:54 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:10:54.714 13:30:54 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:10:54.714 13:30:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:54.714 13:30:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:54.714 13:30:54 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:10:54.714 1+0 records in
00:10:54.714 1+0 records out
00:10:54.714 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000315276 s, 13.0 MB/s
00:10:54.714 13:30:54 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:10:54.714 13:30:54 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:10:54.714 13:30:54 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:10:54.714 13:30:54 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:54.714 13:30:54 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:10:54.714 13:30:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:54.714 13:30:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:10:54.714 13:30:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:10:54.973 /dev/nbd1
00:10:54.973 13:30:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:10:54.973 13:30:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:10:54.973 13:30:54 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:10:54.973 13:30:54 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:10:54.973 13:30:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:54.973 13:30:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:54.973 13:30:54 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:10:54.973 13:30:54 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:10:54.973 13:30:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:54.973 13:30:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:54.973 13:30:54 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:10:54.973 1+0 records in
00:10:54.973 1+0 records out
00:10:54.973 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000474189 s, 8.6 MB/s
00:10:54.973 13:30:54 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:10:54.973 13:30:54 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:10:54.973 13:30:54 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:10:54.973 13:30:54 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:54.973 13:30:54 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:10:54.973 13:30:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:54.973 13:30:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:10:55.232 13:30:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:10:55.232 13:30:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:55.232 13:30:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:10:55.232 13:30:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:10:55.232 {
00:10:55.232 "nbd_device": "/dev/nbd0",
00:10:55.232 "bdev_name": "Malloc0"
00:10:55.232 },
00:10:55.232 {
00:10:55.232 "nbd_device": "/dev/nbd1",
00:10:55.232 "bdev_name":
"Malloc1" 00:10:55.232 } 00:10:55.232 ]' 00:10:55.232 13:30:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:55.232 13:30:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:55.232 { 00:10:55.232 "nbd_device": "/dev/nbd0", 00:10:55.232 "bdev_name": "Malloc0" 00:10:55.232 }, 00:10:55.232 { 00:10:55.232 "nbd_device": "/dev/nbd1", 00:10:55.232 "bdev_name": "Malloc1" 00:10:55.232 } 00:10:55.232 ]' 00:10:55.232 13:30:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:55.232 /dev/nbd1' 00:10:55.232 13:30:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:55.232 /dev/nbd1' 00:10:55.232 13:30:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:55.232 13:30:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:55.232 13:30:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:55.232 13:30:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:55.232 13:30:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:55.232 13:30:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:55.232 13:30:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:55.232 13:30:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:55.232 13:30:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:55.232 13:30:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:55.232 13:30:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:55.232 13:30:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:55.492 256+0 records in 00:10:55.492 256+0 records out 00:10:55.492 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00626663 s, 167 MB/s 
00:10:55.492 13:30:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:55.492 13:30:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:55.492 256+0 records in 00:10:55.492 256+0 records out 00:10:55.492 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0461391 s, 22.7 MB/s 00:10:55.492 13:30:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:55.492 13:30:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:55.492 256+0 records in 00:10:55.492 256+0 records out 00:10:55.492 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0352929 s, 29.7 MB/s 00:10:55.492 13:30:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:55.492 13:30:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:55.492 13:30:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:55.492 13:30:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:55.492 13:30:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:55.492 13:30:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:55.492 13:30:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:55.492 13:30:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:55.492 13:30:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:55.492 13:30:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:55.492 13:30:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:10:55.492 13:30:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:55.492 13:30:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:55.492 13:30:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:55.492 13:30:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:55.492 13:30:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:55.492 13:30:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:55.492 13:30:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:55.492 13:30:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:55.753 13:30:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:55.753 13:30:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:55.753 13:30:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:55.753 13:30:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:55.753 13:30:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:55.753 13:30:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:55.753 13:30:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:55.753 13:30:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:55.753 13:30:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:55.753 13:30:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:56.011 13:30:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:56.011 13:30:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:10:56.011 13:30:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:56.011 13:30:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:56.011 13:30:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:56.011 13:30:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:56.011 13:30:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:56.011 13:30:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:56.011 13:30:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:56.011 13:30:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:56.011 13:30:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:56.271 13:30:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:56.271 13:30:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:56.271 13:30:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:56.271 13:30:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:56.271 13:30:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:56.271 13:30:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:56.271 13:30:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:56.271 13:30:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:56.271 13:30:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:56.271 13:30:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:56.271 13:30:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:56.271 13:30:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:56.271 13:30:55 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:56.530 13:30:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:57.907 [2024-11-20 13:30:57.214737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:57.907 [2024-11-20 13:30:57.340153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:57.907 [2024-11-20 13:30:57.340154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.166 [2024-11-20 13:30:57.553770] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:58.166 [2024-11-20 13:30:57.553908] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:59.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:59.540 13:30:58 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58150 /var/tmp/spdk-nbd.sock 00:10:59.540 13:30:58 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58150 ']' 00:10:59.540 13:30:58 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:59.540 13:30:58 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:59.540 13:30:58 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:10:59.540 13:30:58 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:59.540 13:30:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:59.799 13:30:59 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:59.799 13:30:59 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:59.799 13:30:59 event.app_repeat -- event/event.sh@39 -- # killprocess 58150 00:10:59.799 13:30:59 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58150 ']' 00:10:59.799 13:30:59 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58150 00:10:59.799 13:30:59 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:10:59.799 13:30:59 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:59.799 13:30:59 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58150 00:11:00.058 killing process with pid 58150 00:11:00.058 13:30:59 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:00.058 13:30:59 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:00.058 13:30:59 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58150' 00:11:00.058 13:30:59 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58150 00:11:00.058 13:30:59 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58150 00:11:01.017 spdk_app_start is called in Round 0. 00:11:01.017 Shutdown signal received, stop current app iteration 00:11:01.017 Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 reinitialization... 00:11:01.017 spdk_app_start is called in Round 1. 00:11:01.017 Shutdown signal received, stop current app iteration 00:11:01.017 Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 reinitialization... 00:11:01.017 spdk_app_start is called in Round 2. 
00:11:01.017 Shutdown signal received, stop current app iteration 00:11:01.017 Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 reinitialization... 00:11:01.017 spdk_app_start is called in Round 3. 00:11:01.017 Shutdown signal received, stop current app iteration 00:11:01.017 13:31:00 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:11:01.017 13:31:00 event.app_repeat -- event/event.sh@42 -- # return 0 00:11:01.017 00:11:01.017 real 0m19.846s 00:11:01.017 user 0m42.259s 00:11:01.017 sys 0m3.219s 00:11:01.017 13:31:00 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.017 13:31:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:01.017 ************************************ 00:11:01.017 END TEST app_repeat 00:11:01.017 ************************************ 00:11:01.275 13:31:00 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:11:01.275 13:31:00 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:11:01.275 13:31:00 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:01.275 13:31:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.275 13:31:00 event -- common/autotest_common.sh@10 -- # set +x 00:11:01.275 ************************************ 00:11:01.275 START TEST cpu_locks 00:11:01.275 ************************************ 00:11:01.275 13:31:00 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:11:01.275 * Looking for test storage... 
00:11:01.275 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:11:01.275 13:31:00 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:01.275 13:31:00 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:11:01.275 13:31:00 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:01.275 13:31:00 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:01.275 13:31:00 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:01.275 13:31:00 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:01.275 13:31:00 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:01.275 13:31:00 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:11:01.275 13:31:00 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:11:01.275 13:31:00 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:11:01.275 13:31:00 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:11:01.275 13:31:00 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:11:01.275 13:31:00 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:11:01.275 13:31:00 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:11:01.276 13:31:00 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:01.276 13:31:00 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:11:01.276 13:31:00 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:11:01.276 13:31:00 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:01.276 13:31:00 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:01.276 13:31:00 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:11:01.276 13:31:00 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:11:01.276 13:31:00 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:01.276 13:31:00 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:11:01.276 13:31:00 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:11:01.276 13:31:00 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:11:01.276 13:31:00 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:11:01.276 13:31:00 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:01.276 13:31:00 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:11:01.276 13:31:00 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:11:01.535 13:31:00 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:01.535 13:31:00 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:01.535 13:31:00 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:11:01.535 13:31:00 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:01.535 13:31:00 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:01.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.535 --rc genhtml_branch_coverage=1 00:11:01.535 --rc genhtml_function_coverage=1 00:11:01.535 --rc genhtml_legend=1 00:11:01.535 --rc geninfo_all_blocks=1 00:11:01.535 --rc geninfo_unexecuted_blocks=1 00:11:01.535 00:11:01.535 ' 00:11:01.535 13:31:00 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:01.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.535 --rc genhtml_branch_coverage=1 00:11:01.535 --rc genhtml_function_coverage=1 00:11:01.535 --rc genhtml_legend=1 00:11:01.535 --rc geninfo_all_blocks=1 00:11:01.535 --rc geninfo_unexecuted_blocks=1 
00:11:01.535 00:11:01.535 ' 00:11:01.535 13:31:00 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:01.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.535 --rc genhtml_branch_coverage=1 00:11:01.535 --rc genhtml_function_coverage=1 00:11:01.535 --rc genhtml_legend=1 00:11:01.535 --rc geninfo_all_blocks=1 00:11:01.535 --rc geninfo_unexecuted_blocks=1 00:11:01.535 00:11:01.535 ' 00:11:01.535 13:31:00 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:01.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.535 --rc genhtml_branch_coverage=1 00:11:01.535 --rc genhtml_function_coverage=1 00:11:01.535 --rc genhtml_legend=1 00:11:01.535 --rc geninfo_all_blocks=1 00:11:01.535 --rc geninfo_unexecuted_blocks=1 00:11:01.535 00:11:01.535 ' 00:11:01.535 13:31:00 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:11:01.535 13:31:00 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:11:01.535 13:31:00 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:11:01.535 13:31:00 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:11:01.535 13:31:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:01.535 13:31:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.535 13:31:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:01.535 ************************************ 00:11:01.535 START TEST default_locks 00:11:01.535 ************************************ 00:11:01.535 13:31:00 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:11:01.535 13:31:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58597 00:11:01.535 13:31:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:01.535 
13:31:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58597 00:11:01.535 13:31:00 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58597 ']' 00:11:01.535 13:31:00 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.536 13:31:00 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.536 13:31:00 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.536 13:31:00 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.536 13:31:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:01.536 [2024-11-20 13:31:00.895345] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:11:01.536 [2024-11-20 13:31:00.895479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58597 ] 00:11:01.794 [2024-11-20 13:31:01.066429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.794 [2024-11-20 13:31:01.196286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.730 13:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:02.730 13:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:11:02.730 13:31:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58597 00:11:02.730 13:31:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58597 00:11:02.730 13:31:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:03.295 13:31:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58597 00:11:03.295 13:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58597 ']' 00:11:03.295 13:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58597 00:11:03.295 13:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:11:03.295 13:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:03.295 13:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58597 00:11:03.295 13:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:03.295 13:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:03.295 killing process with pid 58597 00:11:03.295 13:31:02 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58597' 00:11:03.295 13:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58597 00:11:03.295 13:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58597 00:11:05.827 13:31:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58597 00:11:05.827 13:31:05 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:11:05.827 13:31:05 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58597 00:11:05.827 13:31:05 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:11:05.827 13:31:05 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:05.827 13:31:05 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:11:05.827 13:31:05 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:05.827 13:31:05 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58597 00:11:05.827 13:31:05 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58597 ']' 00:11:05.827 13:31:05 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.828 13:31:05 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:05.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.828 13:31:05 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:05.828 13:31:05 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:05.828 13:31:05 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:05.828 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58597) - No such process 00:11:05.828 ERROR: process (pid: 58597) is no longer running 00:11:05.828 13:31:05 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:05.828 13:31:05 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:11:05.828 13:31:05 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:11:05.828 13:31:05 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:05.828 13:31:05 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:05.828 13:31:05 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:05.828 13:31:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:11:05.828 13:31:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:05.828 13:31:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:11:05.828 13:31:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:05.828 00:11:05.828 real 0m4.270s 00:11:05.828 user 0m4.267s 00:11:05.828 sys 0m0.689s 00:11:05.828 13:31:05 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:05.828 13:31:05 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:05.828 ************************************ 00:11:05.828 END TEST default_locks 00:11:05.828 ************************************ 00:11:05.828 13:31:05 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:11:05.828 13:31:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:11:05.828 13:31:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:05.828 13:31:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:05.828 ************************************ 00:11:05.828 START TEST default_locks_via_rpc 00:11:05.828 ************************************ 00:11:05.828 13:31:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:11:05.828 13:31:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58678 00:11:05.828 13:31:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58678 00:11:05.828 13:31:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:05.828 13:31:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58678 ']' 00:11:05.828 13:31:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.828 13:31:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:05.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.828 13:31:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.828 13:31:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:05.828 13:31:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.828 [2024-11-20 13:31:05.239045] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:11:05.828 [2024-11-20 13:31:05.239195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58678 ] 00:11:06.086 [2024-11-20 13:31:05.414682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.086 [2024-11-20 13:31:05.540006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.025 13:31:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:07.025 13:31:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:07.025 13:31:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:11:07.025 13:31:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.025 13:31:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.025 13:31:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.025 13:31:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:11:07.025 13:31:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:07.025 13:31:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:11:07.025 13:31:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:07.025 13:31:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:11:07.025 13:31:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.025 13:31:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.025 13:31:06 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.025 13:31:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58678 00:11:07.025 13:31:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58678 00:11:07.025 13:31:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:07.593 13:31:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58678 00:11:07.593 13:31:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58678 ']' 00:11:07.593 13:31:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58678 00:11:07.593 13:31:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:11:07.593 13:31:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:07.593 13:31:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58678 00:11:07.593 killing process with pid 58678 00:11:07.593 13:31:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:07.593 13:31:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:07.593 13:31:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58678' 00:11:07.593 13:31:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58678 00:11:07.593 13:31:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58678 00:11:10.128 ************************************ 00:11:10.128 END TEST default_locks_via_rpc 00:11:10.128 ************************************ 00:11:10.128 00:11:10.128 real 0m4.340s 00:11:10.128 user 0m4.361s 00:11:10.128 sys 0m0.679s 00:11:10.128 
13:31:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:10.128 13:31:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:10.128 13:31:09 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:11:10.128 13:31:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:10.128 13:31:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:10.128 13:31:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:10.128 ************************************ 00:11:10.128 START TEST non_locking_app_on_locked_coremask 00:11:10.128 ************************************ 00:11:10.128 13:31:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:11:10.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
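The repeated "Waiting for process to start up and listen on UNIX domain socket" messages come from a `waitforlisten`-style poll loop: the harness starts `spdk_tgt`, then waits for the RPC socket while checking that the target is still alive (`max_retries=100` appears in the trace). Below is a hedged sketch of that pattern, reconstructed from the xtrace for illustration; SPDK's real `autotest_common.sh` helper also probes the socket with an RPC call, which is simplified here to a socket-file existence check, and the sleep interval is an assumption.

```shell
#!/usr/bin/env bash
# Sketch of a waitforlisten-style helper: poll until a UNIX socket
# appears at the RPC address, giving up if the target process exits
# first or the retry budget runs out.
# (Reconstructed illustration, not SPDK's actual helper.)
waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=${3:-100}   # the trace shows max_retries=100
    local i=0
    while (( i++ < max_retries )); do
        # stop early if the process we are waiting on has died
        kill -0 "$pid" 2>/dev/null || return 1
        # -S tests for a socket-type file at the RPC address
        [ -S "$rpc_addr" ] && return 0
        sleep 0.1                 # polling interval is an assumption
    done
    return 1
}
```

The `kill -0` probe sends no signal; it only checks that the pid still exists, so a crashed target fails the wait immediately instead of burning the whole retry budget.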
00:11:10.128 13:31:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58752 00:11:10.128 13:31:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:10.128 13:31:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58752 /var/tmp/spdk.sock 00:11:10.128 13:31:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58752 ']' 00:11:10.128 13:31:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.128 13:31:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:10.128 13:31:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.128 13:31:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:10.128 13:31:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:10.385 [2024-11-20 13:31:09.620190] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:11:10.385 [2024-11-20 13:31:09.620369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58752 ] 00:11:10.385 [2024-11-20 13:31:09.797120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.643 [2024-11-20 13:31:09.936712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.581 13:31:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:11.581 13:31:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:11.581 13:31:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58773 00:11:11.581 13:31:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58773 /var/tmp/spdk2.sock 00:11:11.581 13:31:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:11:11.581 13:31:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58773 ']' 00:11:11.581 13:31:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:11.582 13:31:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:11.582 13:31:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:11.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:11:11.582 13:31:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:11.582 13:31:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:11.582 [2024-11-20 13:31:10.930010] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:11:11.582 [2024-11-20 13:31:10.930385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58773 ] 00:11:11.840 [2024-11-20 13:31:11.111562] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:11:11.840 [2024-11-20 13:31:11.111638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.099 [2024-11-20 13:31:11.360707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.635 13:31:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:14.635 13:31:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:14.635 13:31:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58752 00:11:14.635 13:31:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58752 00:11:14.635 13:31:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:14.895 13:31:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58752 00:11:14.895 13:31:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58752 ']' 00:11:14.895 13:31:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58752 00:11:14.895 13:31:14 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:14.895 13:31:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:14.895 13:31:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58752 00:11:15.154 13:31:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:15.154 13:31:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:15.154 killing process with pid 58752 00:11:15.154 13:31:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58752' 00:11:15.154 13:31:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58752 00:11:15.154 13:31:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58752 00:11:20.422 13:31:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58773 00:11:20.422 13:31:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58773 ']' 00:11:20.422 13:31:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58773 00:11:20.422 13:31:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:20.422 13:31:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:20.422 13:31:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58773 00:11:20.422 killing process with pid 58773 00:11:20.422 13:31:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:11:20.422 13:31:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:20.422 13:31:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58773' 00:11:20.422 13:31:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58773 00:11:20.422 13:31:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58773 00:11:22.355 ************************************ 00:11:22.355 END TEST non_locking_app_on_locked_coremask 00:11:22.355 ************************************ 00:11:22.355 00:11:22.355 real 0m12.179s 00:11:22.355 user 0m12.721s 00:11:22.355 sys 0m1.364s 00:11:22.355 13:31:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:22.355 13:31:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:22.355 13:31:21 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:11:22.355 13:31:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:22.355 13:31:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:22.355 13:31:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:22.355 ************************************ 00:11:22.355 START TEST locking_app_on_unlocked_coremask 00:11:22.355 ************************************ 00:11:22.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
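The section above shows why the second target (pid 58773) only starts when launched with `--disable-cpumask-locks`: SPDK's cpumask locks are advisory file locks on per-core files (the trace greps `lslocks` output for `spdk_cpu_lock`), and core 0 is already claimed by the first instance. A minimal `flock(1)` demonstration of the same mechanism follows; the lock path is a stand-in for illustration, not SPDK's real `/var/tmp/spdk_cpu_lock_000`.

```shell
#!/usr/bin/env bash
# Demonstrates the advisory-lock mechanism behind the cpumask locks:
# the first holder takes an exclusive lock on the core's lock file;
# a second, independent open of the same file is refused while the
# first lock is held, and succeeds once it is released.
lock=/tmp/demo_cpu_lock_000    # stand-in path, not SPDK's real file

exec 9>"$lock"                 # first "instance" opens the lock file
flock -n 9 && echo "core 0 claimed"

# a second open file description (like a second spdk_tgt) is refused
( exec 8>"$lock"; flock -n 8 || echo "core 0 already locked" )

exec 9>&-                      # closing the fd drops the lock
( exec 8>"$lock"; flock -n 8 && echo "core 0 free again" )
rm -f "$lock"
```

Because the locks are advisory, a process that never calls `flock` (the `--disable-cpumask-locks` case) is unaffected; the lock only stops cooperating instances from doubling up on a core.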
00:11:22.355 13:31:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:11:22.355 13:31:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58930 00:11:22.355 13:31:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58930 /var/tmp/spdk.sock 00:11:22.355 13:31:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58930 ']' 00:11:22.355 13:31:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.355 13:31:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:22.355 13:31:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.355 13:31:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:22.355 13:31:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:11:22.355 13:31:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:22.670 [2024-11-20 13:31:21.853961] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:11:22.670 [2024-11-20 13:31:21.854108] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58930 ] 00:11:22.670 [2024-11-20 13:31:22.037508] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:22.670 [2024-11-20 13:31:22.037611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.930 [2024-11-20 13:31:22.161010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.864 13:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:23.864 13:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:23.864 13:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:23.864 13:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58946 00:11:23.864 13:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58946 /var/tmp/spdk2.sock 00:11:23.864 13:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58946 ']' 00:11:23.864 13:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:23.864 13:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:23.864 13:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:23.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:23.864 13:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:23.864 13:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:23.864 [2024-11-20 13:31:23.167219] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:11:23.864 [2024-11-20 13:31:23.167565] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58946 ] 00:11:24.121 [2024-11-20 13:31:23.358181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.380 [2024-11-20 13:31:23.613935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.910 13:31:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:26.910 13:31:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:26.910 13:31:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58946 00:11:26.910 13:31:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58946 00:11:26.910 13:31:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:27.479 13:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58930 00:11:27.479 13:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58930 ']' 00:11:27.479 13:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58930 00:11:27.479 13:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:27.479 13:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:27.479 13:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58930 00:11:27.479 killing process with pid 58930 00:11:27.479 13:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:27.479 13:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:27.479 13:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58930' 00:11:27.479 13:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58930 00:11:27.479 13:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58930 00:11:32.752 13:31:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58946 00:11:32.752 13:31:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58946 ']' 00:11:32.752 13:31:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58946 00:11:32.752 13:31:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:32.752 13:31:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:32.752 13:31:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58946 00:11:32.752 killing process with pid 58946 00:11:32.752 13:31:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:32.752 13:31:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:32.752 13:31:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58946' 00:11:32.752 13:31:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58946 00:11:32.752 13:31:31 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@978 -- # wait 58946 00:11:34.655 00:11:34.655 real 0m12.345s 00:11:34.655 user 0m12.929s 00:11:34.655 sys 0m1.371s 00:11:34.656 13:31:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.656 ************************************ 00:11:34.656 END TEST locking_app_on_unlocked_coremask 00:11:34.656 ************************************ 00:11:34.656 13:31:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:34.915 13:31:34 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:11:34.915 13:31:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:34.915 13:31:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.915 13:31:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:34.915 ************************************ 00:11:34.915 START TEST locking_app_on_locked_coremask 00:11:34.915 ************************************ 00:11:34.915 13:31:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:11:34.915 13:31:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:34.915 13:31:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59107 00:11:34.915 13:31:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59107 /var/tmp/spdk.sock 00:11:34.915 13:31:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59107 ']' 00:11:34.915 13:31:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.915 13:31:34 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:11:34.915 13:31:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.915 13:31:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:34.915 13:31:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:34.915 [2024-11-20 13:31:34.263106] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:11:34.915 [2024-11-20 13:31:34.263234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59107 ] 00:11:35.173 [2024-11-20 13:31:34.437166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.173 [2024-11-20 13:31:34.553916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.111 13:31:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:36.111 13:31:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:36.111 13:31:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59123 00:11:36.111 13:31:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59123 /var/tmp/spdk2.sock 00:11:36.111 13:31:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:36.111 13:31:35 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@652 -- # local es=0 00:11:36.111 13:31:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59123 /var/tmp/spdk2.sock 00:11:36.111 13:31:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:11:36.111 13:31:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:36.111 13:31:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:11:36.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:36.111 13:31:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:36.111 13:31:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59123 /var/tmp/spdk2.sock 00:11:36.111 13:31:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59123 ']' 00:11:36.111 13:31:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:36.111 13:31:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:36.111 13:31:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:36.111 13:31:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:36.111 13:31:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:36.112 [2024-11-20 13:31:35.539669] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:11:36.112 [2024-11-20 13:31:35.539795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59123 ] 00:11:36.370 [2024-11-20 13:31:35.722550] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59107 has claimed it. 00:11:36.370 [2024-11-20 13:31:35.722646] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:36.939 ERROR: process (pid: 59123) is no longer running 00:11:36.939 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59123) - No such process 00:11:36.939 13:31:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:36.939 13:31:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:11:36.939 13:31:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:11:36.939 13:31:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:36.939 13:31:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:36.939 13:31:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:36.939 13:31:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59107 00:11:36.939 13:31:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59107 00:11:36.939 13:31:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:37.199 13:31:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59107 00:11:37.199 13:31:36 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59107 ']' 00:11:37.199 13:31:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59107 00:11:37.199 13:31:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:37.199 13:31:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:37.199 13:31:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59107 00:11:37.199 killing process with pid 59107 00:11:37.199 13:31:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:37.199 13:31:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:37.199 13:31:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59107' 00:11:37.199 13:31:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59107 00:11:37.199 13:31:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59107 00:11:39.868 00:11:39.868 real 0m4.876s 00:11:39.868 user 0m5.060s 00:11:39.868 sys 0m0.864s 00:11:39.868 13:31:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.868 13:31:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:39.868 ************************************ 00:11:39.868 END TEST locking_app_on_locked_coremask 00:11:39.868 ************************************ 00:11:39.868 13:31:39 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:11:39.868 13:31:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 
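Every section tears down its target through the same `killprocess` sequence visible in the trace: guard against an empty pid, confirm the process exists, look up its command name (SPDK reactors report as `reactor_0`), then kill and reap it. A hedged reconstruction of that pattern is below; it is an illustration following the xtrace, not SPDK's verbatim helper, and the real helper's sudo special-casing (`'[' reactor_0 = sudo ']'`) is simplified away.

```shell
#!/usr/bin/env bash
# Sketch of the killprocess teardown pattern from the trace:
# refuse an empty pid, confirm the process is alive, log which
# process is being killed, then SIGTERM and reap it.
# (Reconstructed illustration, not SPDK's actual helper.)
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1            # the '[' -z ... ']' guard
    kill -0 "$pid" 2>/dev/null || return 1
    local process_name
    process_name=$(ps --no-headers -o comm= -p "$pid")
    echo "killing process with pid $pid ($process_name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true      # reap; signal exit status is expected
    return 0
}
```

The `kill -0` existence check before `kill` is what makes the helper safe to call on a pid that may have already exited between test steps.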
00:11:39.868 13:31:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:39.868 13:31:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:39.868 ************************************ 00:11:39.868 START TEST locking_overlapped_coremask 00:11:39.868 ************************************ 00:11:39.868 13:31:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:11:39.868 13:31:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59193 00:11:39.868 13:31:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:11:39.868 13:31:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59193 /var/tmp/spdk.sock 00:11:39.868 13:31:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59193 ']' 00:11:39.868 13:31:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.868 13:31:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:39.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.868 13:31:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.868 13:31:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:39.868 13:31:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:39.868 [2024-11-20 13:31:39.219643] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:11:39.868 [2024-11-20 13:31:39.219973] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59193 ] 00:11:40.127 [2024-11-20 13:31:39.404421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:40.127 [2024-11-20 13:31:39.533906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:40.127 [2024-11-20 13:31:39.534047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.127 [2024-11-20 13:31:39.534127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:41.063 13:31:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:41.063 13:31:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:41.063 13:31:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59216 00:11:41.063 13:31:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:11:41.063 13:31:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59216 /var/tmp/spdk2.sock 00:11:41.063 13:31:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:11:41.063 13:31:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59216 /var/tmp/spdk2.sock 00:11:41.063 13:31:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:11:41.063 13:31:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:41.063 13:31:40 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:11:41.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:41.063 13:31:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:41.063 13:31:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59216 /var/tmp/spdk2.sock 00:11:41.063 13:31:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59216 ']' 00:11:41.063 13:31:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:41.063 13:31:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:41.063 13:31:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:41.063 13:31:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:41.063 13:31:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:41.063 [2024-11-20 13:31:40.528670] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:11:41.063 [2024-11-20 13:31:40.528797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59216 ] 00:11:41.322 [2024-11-20 13:31:40.713604] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59193 has claimed it. 00:11:41.322 [2024-11-20 13:31:40.713683] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
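The `claim_cpu_cores` failure above comes from the second target trying to take an exclusive lock that pid 59193 already holds for core 2. The behaviour can be reproduced with `flock(1)` against a throwaway lock file; the file name and the use of the `flock` utility here are illustrative assumptions, not SPDK's exact implementation:

```shell
#!/usr/bin/env bash
# Reproduce the "Cannot create lock on core 2" situation with flock(1):
# one process holds an exclusive lock on a core's lock file, and a second
# non-blocking attempt on the same file fails.
lockdir=$(mktemp -d)
lock="$lockdir/spdk_cpu_lock_002"

flock -x "$lock" sleep 2 &   # first claimant holds the lock for 2 seconds
holder=$!
sleep 0.3                    # give the holder time to acquire it

if flock -n "$lock" true; then
    result="core 2 claimed"
else
    result="Cannot create lock on core 2, another process has claimed it"
fi
echo "$result"

wait "$holder"
rm -rf "$lockdir"
```

Passing `--disable-cpumask-locks` (used by the `locking_overlapped_coremask_via_rpc` test further down) skips this claim step entirely, which is why the second target there starts despite the overlap.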
00:11:41.888 ERROR: process (pid: 59216) is no longer running 00:11:41.888 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59216) - No such process 00:11:41.888 13:31:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:41.889 13:31:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:11:41.889 13:31:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:11:41.889 13:31:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:41.889 13:31:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:41.889 13:31:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:41.889 13:31:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:11:41.889 13:31:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:41.889 13:31:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:41.889 13:31:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:41.889 13:31:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59193 00:11:41.889 13:31:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59193 ']' 00:11:41.889 13:31:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59193 00:11:41.889 13:31:41 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:11:41.889 13:31:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:41.889 13:31:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59193 00:11:41.889 13:31:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:41.889 13:31:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:41.889 13:31:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59193' 00:11:41.889 killing process with pid 59193 00:11:41.889 13:31:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59193 00:11:41.889 13:31:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59193 00:11:44.421 00:11:44.421 real 0m4.512s 00:11:44.421 user 0m12.153s 00:11:44.421 sys 0m0.652s 00:11:44.421 13:31:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.421 13:31:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:44.421 ************************************ 00:11:44.421 END TEST locking_overlapped_coremask 00:11:44.421 ************************************ 00:11:44.421 13:31:43 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:11:44.421 13:31:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:44.421 13:31:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.421 13:31:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:44.421 ************************************ 00:11:44.421 START TEST 
locking_overlapped_coremask_via_rpc 00:11:44.421 ************************************ 00:11:44.421 13:31:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:11:44.421 13:31:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59280 00:11:44.421 13:31:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:11:44.421 13:31:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59280 /var/tmp/spdk.sock 00:11:44.421 13:31:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59280 ']' 00:11:44.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.421 13:31:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.421 13:31:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:44.421 13:31:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.421 13:31:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:44.421 13:31:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.421 [2024-11-20 13:31:43.795811] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:11:44.421 [2024-11-20 13:31:43.795941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59280 ] 00:11:44.681 [2024-11-20 13:31:43.977872] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:11:44.681 [2024-11-20 13:31:43.977936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:44.681 [2024-11-20 13:31:44.099920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.681 [2024-11-20 13:31:44.100089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.681 [2024-11-20 13:31:44.100153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:45.625 13:31:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:45.625 13:31:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:45.625 13:31:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59298 00:11:45.625 13:31:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59298 /var/tmp/spdk2.sock 00:11:45.625 13:31:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:11:45.625 13:31:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59298 ']' 00:11:45.625 13:31:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:45.625 13:31:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:45.625 13:31:44 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:45.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:45.625 13:31:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:45.625 13:31:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.625 [2024-11-20 13:31:45.078437] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:11:45.625 [2024-11-20 13:31:45.078798] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59298 ] 00:11:45.885 [2024-11-20 13:31:45.265383] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:45.885 [2024-11-20 13:31:45.265467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:46.144 [2024-11-20 13:31:45.510877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:46.144 [2024-11-20 13:31:45.510947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:46.144 [2024-11-20 13:31:45.510971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:48.683 13:31:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:48.683 13:31:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:48.683 13:31:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:11:48.683 13:31:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.683 13:31:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:48.683 13:31:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.683 13:31:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:48.683 13:31:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:48.683 13:31:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:48.683 13:31:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:48.683 13:31:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:48.684 13:31:47 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:48.684 13:31:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:48.684 13:31:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:48.684 13:31:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.684 13:31:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:48.684 [2024-11-20 13:31:47.808349] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59280 has claimed it. 00:11:48.684 request: 00:11:48.684 { 00:11:48.684 "method": "framework_enable_cpumask_locks", 00:11:48.684 "req_id": 1 00:11:48.684 } 00:11:48.684 Got JSON-RPC error response 00:11:48.684 response: 00:11:48.684 { 00:11:48.684 "code": -32603, 00:11:48.684 "message": "Failed to claim CPU core: 2" 00:11:48.684 } 00:11:48.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
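Steps that are supposed to fail — like the second `framework_enable_cpumask_locks` call above, which correctly returns the -32603 "Failed to claim CPU core: 2" error — are wrapped in the harness's `NOT` helper, which inverts the wrapped command's exit status. A condensed sketch of that inversion pattern, simplified from autotest_common.sh (the real helper also validates the command with `valid_exec_arg` and records the failing status in `es`):

```shell
#!/usr/bin/env bash
# NOT: succeed only if the wrapped command fails. Simplified sketch of the
# autotest_common.sh helper quoted in the log.
NOT() {
    if "$@"; then
        return 1   # command unexpectedly succeeded
    fi
    return 0       # command failed, which is what the test expected
}

NOT false && echo "expected failure observed"
```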
00:11:48.684 13:31:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:48.684 13:31:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:48.684 13:31:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:48.684 13:31:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:48.684 13:31:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:48.684 13:31:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59280 /var/tmp/spdk.sock 00:11:48.684 13:31:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59280 ']' 00:11:48.684 13:31:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.684 13:31:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:48.684 13:31:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
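The repeated "Waiting for process to start up and listen on UNIX domain socket ..." lines come from `waitforlisten`, which polls until the freshly launched target both stays alive and has created its RPC socket. A minimal re-creation of that polling loop; the real helper in autotest_common.sh additionally confirms the socket answers RPC, so treat this as a sketch in which a plain `-e` existence test stands in for the real socket check:

```shell
#!/usr/bin/env bash
# Poll until $pid is alive and $sock exists, up to $max_retries attempts.
# Sketch of the waitforlisten pattern from the log; not the real helper.
waitforlisten() {
    local pid=$1
    local sock=${2:-/var/tmp/spdk.sock}
    local max_retries=${3:-100}
    echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
    local i
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
        [[ -e $sock ]] && return 0               # socket path appeared
        sleep 0.1
    done
    return 1                                     # timed out
}
```

The "No such process" branch seen earlier in the log is the `kill -0` check firing after the second target exits with the lock error.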
00:11:48.684 13:31:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:48.684 13:31:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:48.684 13:31:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:48.684 13:31:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:48.684 13:31:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59298 /var/tmp/spdk2.sock 00:11:48.684 13:31:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59298 ']' 00:11:48.684 13:31:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:48.684 13:31:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:48.684 13:31:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:48.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
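`check_remaining_locks`, invoked just below, verifies that exactly the lock files for cores 0-2 remain by comparing a filesystem glob against a brace expansion. The same comparison, re-created against a temporary directory so it is safe to run anywhere:

```shell
#!/usr/bin/env bash
# Compare what lock files exist (glob) against what should exist (brace
# expansion), as check_remaining_locks does for /var/tmp/spdk_cpu_lock_*.
dir=$(mktemp -d)
touch "$dir"/spdk_cpu_lock_{000..002}

locks=( "$dir"/spdk_cpu_lock_* )                     # actual files, glob-sorted
locks_expected=( "$dir"/spdk_cpu_lock_{000..002} )   # expected files

if [[ "${locks[*]}" == "${locks_expected[*]}" ]]; then
    verdict="locks match"
else
    verdict="unexpected locks: ${locks[*]}"
fi
echo "$verdict"
rm -rf "$dir"
```

The long backslash-escaped pattern in the log is the same `[[ ... == ... ]]` comparison after xtrace has quoted every character of the expected string.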
00:11:48.684 13:31:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:48.684 13:31:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:48.944 13:31:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:48.944 13:31:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:48.944 13:31:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:11:48.944 13:31:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:48.944 13:31:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:48.944 13:31:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:48.944 00:11:48.944 real 0m4.634s 00:11:48.944 user 0m1.445s 00:11:48.944 sys 0m0.288s 00:11:48.944 13:31:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.944 13:31:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:48.944 ************************************ 00:11:48.944 END TEST locking_overlapped_coremask_via_rpc 00:11:48.944 ************************************ 00:11:48.944 13:31:48 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:11:48.944 13:31:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59280 ]] 00:11:48.944 13:31:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59280 00:11:48.944 13:31:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59280 ']' 00:11:48.944 13:31:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59280 00:11:48.944 13:31:48 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:11:48.944 13:31:48 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:48.944 13:31:48 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59280 00:11:49.203 killing process with pid 59280 00:11:49.203 13:31:48 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:49.203 13:31:48 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:49.203 13:31:48 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59280' 00:11:49.203 13:31:48 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59280 00:11:49.203 13:31:48 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59280 00:11:51.738 13:31:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59298 ]] 00:11:51.738 13:31:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59298 00:11:51.738 13:31:51 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59298 ']' 00:11:51.738 13:31:51 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59298 00:11:51.738 13:31:51 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:11:51.738 13:31:51 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:51.738 13:31:51 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59298 00:11:51.738 13:31:51 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:11:51.738 13:31:51 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:11:51.738 killing process with pid 59298 00:11:51.738 13:31:51 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59298' 00:11:51.738 13:31:51 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59298 00:11:51.738 13:31:51 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59298 00:11:54.269 13:31:53 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:54.269 Process with pid 59280 is not found 00:11:54.269 Process with pid 59298 is not found 00:11:54.269 13:31:53 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:11:54.269 13:31:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59280 ]] 00:11:54.269 13:31:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59280 00:11:54.269 13:31:53 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59280 ']' 00:11:54.269 13:31:53 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59280 00:11:54.269 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59280) - No such process 00:11:54.269 13:31:53 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59280 is not found' 00:11:54.269 13:31:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59298 ]] 00:11:54.269 13:31:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59298 00:11:54.269 13:31:53 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59298 ']' 00:11:54.269 13:31:53 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59298 00:11:54.269 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59298) - No such process 00:11:54.269 13:31:53 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59298 is not found' 00:11:54.269 13:31:53 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:54.269 ************************************ 00:11:54.269 END TEST cpu_locks 00:11:54.269 ************************************ 00:11:54.269 00:11:54.269 real 0m53.176s 00:11:54.269 user 1m31.863s 00:11:54.269 sys 0m7.216s 00:11:54.269 13:31:53 event.cpu_locks -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:11:54.269 13:31:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:54.269 ************************************ 00:11:54.269 END TEST event 00:11:54.269 ************************************ 00:11:54.269 00:11:54.269 real 1m24.320s 00:11:54.269 user 2m31.453s 00:11:54.269 sys 0m11.727s 00:11:54.269 13:31:53 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:54.269 13:31:53 event -- common/autotest_common.sh@10 -- # set +x 00:11:54.527 13:31:53 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:11:54.527 13:31:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:54.527 13:31:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:54.527 13:31:53 -- common/autotest_common.sh@10 -- # set +x 00:11:54.527 ************************************ 00:11:54.527 START TEST thread 00:11:54.527 ************************************ 00:11:54.527 13:31:53 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:11:54.527 * Looking for test storage... 
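Every suite in this log, the thread and app_cmdline tests included, is driven through the harness's `run_test` wrapper, which prints the starred START/END banners around the command it runs. A trimmed-down sketch of that wrapper (the real one also records per-test timing and xtraces the wrapped command):

```shell
#!/usr/bin/env bash
# Print START/END banners around a named test command and propagate its
# exit status; simplified from the run_test helper behind the banners above.
run_test() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

run_test demo echo "hello from the wrapped command"
```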
00:11:54.527 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:11:54.527 13:31:53 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:54.527 13:31:53 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:11:54.527 13:31:53 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:54.527 13:31:53 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:54.527 13:31:53 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:54.527 13:31:53 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:54.527 13:31:53 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:54.527 13:31:53 thread -- scripts/common.sh@336 -- # IFS=.-: 00:11:54.527 13:31:53 thread -- scripts/common.sh@336 -- # read -ra ver1 00:11:54.527 13:31:53 thread -- scripts/common.sh@337 -- # IFS=.-: 00:11:54.527 13:31:54 thread -- scripts/common.sh@337 -- # read -ra ver2 00:11:54.527 13:31:54 thread -- scripts/common.sh@338 -- # local 'op=<' 00:11:54.527 13:31:54 thread -- scripts/common.sh@340 -- # ver1_l=2 00:11:54.527 13:31:54 thread -- scripts/common.sh@341 -- # ver2_l=1 00:11:54.527 13:31:54 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:54.527 13:31:54 thread -- scripts/common.sh@344 -- # case "$op" in 00:11:54.527 13:31:54 thread -- scripts/common.sh@345 -- # : 1 00:11:54.527 13:31:54 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:54.527 13:31:54 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:54.527 13:31:54 thread -- scripts/common.sh@365 -- # decimal 1 00:11:54.527 13:31:54 thread -- scripts/common.sh@353 -- # local d=1 00:11:54.527 13:31:54 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:54.527 13:31:54 thread -- scripts/common.sh@355 -- # echo 1 00:11:54.786 13:31:54 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:11:54.786 13:31:54 thread -- scripts/common.sh@366 -- # decimal 2 00:11:54.786 13:31:54 thread -- scripts/common.sh@353 -- # local d=2 00:11:54.786 13:31:54 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:54.786 13:31:54 thread -- scripts/common.sh@355 -- # echo 2 00:11:54.786 13:31:54 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:11:54.786 13:31:54 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:54.786 13:31:54 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:54.786 13:31:54 thread -- scripts/common.sh@368 -- # return 0 00:11:54.786 13:31:54 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:54.786 13:31:54 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:54.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.786 --rc genhtml_branch_coverage=1 00:11:54.786 --rc genhtml_function_coverage=1 00:11:54.786 --rc genhtml_legend=1 00:11:54.786 --rc geninfo_all_blocks=1 00:11:54.786 --rc geninfo_unexecuted_blocks=1 00:11:54.786 00:11:54.786 ' 00:11:54.786 13:31:54 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:54.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.786 --rc genhtml_branch_coverage=1 00:11:54.786 --rc genhtml_function_coverage=1 00:11:54.786 --rc genhtml_legend=1 00:11:54.786 --rc geninfo_all_blocks=1 00:11:54.786 --rc geninfo_unexecuted_blocks=1 00:11:54.786 00:11:54.786 ' 00:11:54.786 13:31:54 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:54.786 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.786 --rc genhtml_branch_coverage=1 00:11:54.786 --rc genhtml_function_coverage=1 00:11:54.786 --rc genhtml_legend=1 00:11:54.786 --rc geninfo_all_blocks=1 00:11:54.786 --rc geninfo_unexecuted_blocks=1 00:11:54.786 00:11:54.786 ' 00:11:54.786 13:31:54 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:54.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.786 --rc genhtml_branch_coverage=1 00:11:54.786 --rc genhtml_function_coverage=1 00:11:54.786 --rc genhtml_legend=1 00:11:54.786 --rc geninfo_all_blocks=1 00:11:54.786 --rc geninfo_unexecuted_blocks=1 00:11:54.786 00:11:54.786 ' 00:11:54.786 13:31:54 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:54.786 13:31:54 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:11:54.786 13:31:54 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:54.786 13:31:54 thread -- common/autotest_common.sh@10 -- # set +x 00:11:54.786 ************************************ 00:11:54.786 START TEST thread_poller_perf 00:11:54.786 ************************************ 00:11:54.786 13:31:54 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:54.786 [2024-11-20 13:31:54.086036] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
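The poller_perf runs that follow print `busy` TSC cycles, a `total_run_count`, and a derived `poller_cost`. The cycle figure is simply busy cycles divided by the number of poller invocations (the nanosecond figure then follows from `tsc_hz`). The division, using the first run's numbers as printed below:

```shell
#!/usr/bin/env bash
# poller_cost (cyc) = busy cycles / total_run_count, using the figures from
# the 1-microsecond-period run in the log.
busy=2501588004
total_run_count=336000
cost_cyc=$(( busy / total_run_count ))
echo "poller_cost: $cost_cyc (cyc)"   # -> poller_cost: 7445 (cyc)
```

The same arithmetic on the zero-period run (2493674336 cycles over 5017000 calls) gives the 497-cycle cost reported there: with no sleep between iterations, the per-call overhead drops by more than an order of magnitude.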
00:11:54.786 [2024-11-20 13:31:54.086388] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59499 ] 00:11:55.045 [2024-11-20 13:31:54.284039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.045 [2024-11-20 13:31:54.401200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.045 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:11:56.423 [2024-11-20T13:31:55.908Z] ====================================== 00:11:56.423 [2024-11-20T13:31:55.908Z] busy:2501588004 (cyc) 00:11:56.423 [2024-11-20T13:31:55.908Z] total_run_count: 336000 00:11:56.423 [2024-11-20T13:31:55.908Z] tsc_hz: 2490000000 (cyc) 00:11:56.423 [2024-11-20T13:31:55.908Z] ====================================== 00:11:56.423 [2024-11-20T13:31:55.908Z] poller_cost: 7445 (cyc), 2989 (nsec) 00:11:56.423 00:11:56.423 real 0m1.611s 00:11:56.423 user 0m1.388s 00:11:56.423 sys 0m0.104s 00:11:56.423 13:31:55 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.423 13:31:55 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:56.423 ************************************ 00:11:56.423 END TEST thread_poller_perf 00:11:56.423 ************************************ 00:11:56.423 13:31:55 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:56.423 13:31:55 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:11:56.423 13:31:55 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.423 13:31:55 thread -- common/autotest_common.sh@10 -- # set +x 00:11:56.423 ************************************ 00:11:56.423 START TEST thread_poller_perf 00:11:56.423 
************************************ 00:11:56.423 13:31:55 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:56.423 [2024-11-20 13:31:55.764731] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:11:56.423 [2024-11-20 13:31:55.764849] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59541 ] 00:11:56.681 [2024-11-20 13:31:55.946425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.681 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:11:56.681 [2024-11-20 13:31:56.068952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.058 [2024-11-20T13:31:57.543Z] ====================================== 00:11:58.058 [2024-11-20T13:31:57.543Z] busy:2493674336 (cyc) 00:11:58.058 [2024-11-20T13:31:57.543Z] total_run_count: 5017000 00:11:58.058 [2024-11-20T13:31:57.543Z] tsc_hz: 2490000000 (cyc) 00:11:58.058 [2024-11-20T13:31:57.543Z] ====================================== 00:11:58.058 [2024-11-20T13:31:57.543Z] poller_cost: 497 (cyc), 199 (nsec) 00:11:58.058 00:11:58.058 real 0m1.591s 00:11:58.058 user 0m1.370s 00:11:58.058 sys 0m0.114s 00:11:58.058 ************************************ 00:11:58.058 END TEST thread_poller_perf 00:11:58.058 ************************************ 00:11:58.058 13:31:57 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:58.058 13:31:57 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:58.058 13:31:57 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:11:58.058 00:11:58.058 real 0m3.563s 00:11:58.058 user 0m2.927s 00:11:58.058 sys 0m0.415s 00:11:58.058 ************************************ 
00:11:58.058 END TEST thread 00:11:58.058 ************************************ 00:11:58.058 13:31:57 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:58.058 13:31:57 thread -- common/autotest_common.sh@10 -- # set +x 00:11:58.058 13:31:57 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:11:58.058 13:31:57 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:58.058 13:31:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:58.058 13:31:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:58.058 13:31:57 -- common/autotest_common.sh@10 -- # set +x 00:11:58.058 ************************************ 00:11:58.058 START TEST app_cmdline 00:11:58.058 ************************************ 00:11:58.058 13:31:57 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:58.317 * Looking for test storage... 00:11:58.317 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:58.317 13:31:57 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:58.317 13:31:57 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:11:58.318 13:31:57 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:58.318 13:31:57 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:58.318 13:31:57 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:58.318 13:31:57 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:58.318 13:31:57 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:58.318 13:31:57 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:11:58.318 13:31:57 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:11:58.318 13:31:57 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:11:58.318 13:31:57 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:11:58.318 13:31:57 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:11:58.318 13:31:57 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:11:58.318 13:31:57 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:11:58.318 13:31:57 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:58.318 13:31:57 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:11:58.318 13:31:57 app_cmdline -- scripts/common.sh@345 -- # : 1 00:11:58.318 13:31:57 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:58.318 13:31:57 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:58.318 13:31:57 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:11:58.318 13:31:57 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:11:58.318 13:31:57 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:58.318 13:31:57 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:11:58.318 13:31:57 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:11:58.318 13:31:57 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:11:58.318 13:31:57 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:11:58.318 13:31:57 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:58.318 13:31:57 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:11:58.318 13:31:57 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:11:58.318 13:31:57 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:58.318 13:31:57 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:58.318 13:31:57 app_cmdline -- scripts/common.sh@368 -- # return 0 00:11:58.318 13:31:57 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:58.318 13:31:57 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:58.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.318 --rc genhtml_branch_coverage=1 00:11:58.318 --rc genhtml_function_coverage=1 00:11:58.318 --rc 
genhtml_legend=1 00:11:58.318 --rc geninfo_all_blocks=1 00:11:58.318 --rc geninfo_unexecuted_blocks=1 00:11:58.318 00:11:58.318 ' 00:11:58.318 13:31:57 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:58.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.318 --rc genhtml_branch_coverage=1 00:11:58.318 --rc genhtml_function_coverage=1 00:11:58.318 --rc genhtml_legend=1 00:11:58.318 --rc geninfo_all_blocks=1 00:11:58.318 --rc geninfo_unexecuted_blocks=1 00:11:58.318 00:11:58.318 ' 00:11:58.318 13:31:57 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:58.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.318 --rc genhtml_branch_coverage=1 00:11:58.318 --rc genhtml_function_coverage=1 00:11:58.318 --rc genhtml_legend=1 00:11:58.318 --rc geninfo_all_blocks=1 00:11:58.318 --rc geninfo_unexecuted_blocks=1 00:11:58.318 00:11:58.318 ' 00:11:58.318 13:31:57 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:58.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.318 --rc genhtml_branch_coverage=1 00:11:58.318 --rc genhtml_function_coverage=1 00:11:58.318 --rc genhtml_legend=1 00:11:58.318 --rc geninfo_all_blocks=1 00:11:58.318 --rc geninfo_unexecuted_blocks=1 00:11:58.318 00:11:58.318 ' 00:11:58.318 13:31:57 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:11:58.318 13:31:57 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59634 00:11:58.318 13:31:57 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:11:58.318 13:31:57 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59634 00:11:58.318 13:31:57 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59634 ']' 00:11:58.318 13:31:57 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.318 13:31:57 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:11:58.318 13:31:57 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.318 13:31:57 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:58.318 13:31:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:58.318 [2024-11-20 13:31:57.784322] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:11:58.318 [2024-11-20 13:31:57.784664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59634 ] 00:11:58.576 [2024-11-20 13:31:57.967024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.835 [2024-11-20 13:31:58.091864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.773 13:31:58 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:59.773 13:31:58 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:11:59.773 13:31:58 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:11:59.773 { 00:11:59.773 "version": "SPDK v25.01-pre git sha1 82b85d9ca", 00:11:59.773 "fields": { 00:11:59.773 "major": 25, 00:11:59.773 "minor": 1, 00:11:59.773 "patch": 0, 00:11:59.773 "suffix": "-pre", 00:11:59.773 "commit": "82b85d9ca" 00:11:59.773 } 00:11:59.773 } 00:11:59.773 13:31:59 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:11:59.773 13:31:59 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:11:59.773 13:31:59 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:11:59.773 13:31:59 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:11:59.773 13:31:59 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:11:59.773 13:31:59 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:11:59.773 13:31:59 app_cmdline -- app/cmdline.sh@26 -- # sort 00:11:59.773 13:31:59 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.773 13:31:59 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:59.773 13:31:59 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.030 13:31:59 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:12:00.030 13:31:59 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:12:00.030 13:31:59 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:00.030 13:31:59 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:12:00.030 13:31:59 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:00.030 13:31:59 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:00.030 13:31:59 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:00.030 13:31:59 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:00.030 13:31:59 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:00.030 13:31:59 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:00.030 13:31:59 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:00.030 13:31:59 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:00.030 13:31:59 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:00.030 13:31:59 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:00.030 request: 00:12:00.030 { 00:12:00.030 "method": "env_dpdk_get_mem_stats", 00:12:00.030 "req_id": 1 00:12:00.030 } 00:12:00.030 Got JSON-RPC error response 00:12:00.030 response: 00:12:00.030 { 00:12:00.030 "code": -32601, 00:12:00.030 "message": "Method not found" 00:12:00.030 } 00:12:00.030 13:31:59 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:12:00.030 13:31:59 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:00.030 13:31:59 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:00.030 13:31:59 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:00.030 13:31:59 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59634 00:12:00.030 13:31:59 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59634 ']' 00:12:00.030 13:31:59 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59634 00:12:00.030 13:31:59 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:12:00.030 13:31:59 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:00.030 13:31:59 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59634 00:12:00.289 killing process with pid 59634 00:12:00.289 13:31:59 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:00.289 13:31:59 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:00.289 13:31:59 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59634' 00:12:00.289 13:31:59 app_cmdline -- common/autotest_common.sh@973 -- # kill 59634 00:12:00.289 13:31:59 app_cmdline -- common/autotest_common.sh@978 -- # wait 59634 00:12:02.952 00:12:02.952 real 0m4.545s 00:12:02.952 user 0m4.696s 00:12:02.952 sys 0m0.709s 00:12:02.952 
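The spdk_tgt in this test was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so the env_dpdk_get_mem_stats call above is rejected with JSON-RPC error -32601. A hedged sketch of that allow-list behavior (the dispatcher below is a hypothetical stand-in, not SPDK's server code; only the error code and message are taken from the logged response):

```python
import json

# Illustrative allow-list dispatch mirroring the JSON-RPC exchange above.
# ALLOWED matches the --rpcs-allowed argument from the log.
ALLOWED = {"spdk_get_version", "rpc_get_methods"}

def dispatch(request: dict) -> dict:
    """Hypothetical dispatcher: reject methods outside the allow list."""
    if request["method"] not in ALLOWED:
        return {
            "id": request.get("req_id", 1),
            "error": {"code": -32601, "message": "Method not found"},
        }
    return {"id": request.get("req_id", 1), "result": {}}

resp = dispatch({"method": "env_dpdk_get_mem_stats", "req_id": 1})
print(json.dumps(resp["error"]))  # {"code": -32601, "message": "Method not found"}
```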
************************************ 00:12:02.952 END TEST app_cmdline 00:12:02.952 ************************************ 00:12:02.952 13:32:01 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.952 13:32:01 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:02.953 13:32:02 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:02.953 13:32:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:02.953 13:32:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.953 13:32:02 -- common/autotest_common.sh@10 -- # set +x 00:12:02.953 ************************************ 00:12:02.953 START TEST version 00:12:02.953 ************************************ 00:12:02.953 13:32:02 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:02.953 * Looking for test storage... 00:12:02.953 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:02.953 13:32:02 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:02.953 13:32:02 version -- common/autotest_common.sh@1693 -- # lcov --version 00:12:02.953 13:32:02 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:02.953 13:32:02 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:02.953 13:32:02 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:02.953 13:32:02 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:02.953 13:32:02 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:02.953 13:32:02 version -- scripts/common.sh@336 -- # IFS=.-: 00:12:02.953 13:32:02 version -- scripts/common.sh@336 -- # read -ra ver1 00:12:02.953 13:32:02 version -- scripts/common.sh@337 -- # IFS=.-: 00:12:02.953 13:32:02 version -- scripts/common.sh@337 -- # read -ra ver2 00:12:02.953 13:32:02 version -- scripts/common.sh@338 -- # local 'op=<' 00:12:02.953 13:32:02 version -- scripts/common.sh@340 -- # ver1_l=2 
00:12:02.953 13:32:02 version -- scripts/common.sh@341 -- # ver2_l=1 00:12:02.953 13:32:02 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:02.953 13:32:02 version -- scripts/common.sh@344 -- # case "$op" in 00:12:02.953 13:32:02 version -- scripts/common.sh@345 -- # : 1 00:12:02.953 13:32:02 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:02.953 13:32:02 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:02.953 13:32:02 version -- scripts/common.sh@365 -- # decimal 1 00:12:02.953 13:32:02 version -- scripts/common.sh@353 -- # local d=1 00:12:02.953 13:32:02 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:02.953 13:32:02 version -- scripts/common.sh@355 -- # echo 1 00:12:02.953 13:32:02 version -- scripts/common.sh@365 -- # ver1[v]=1 00:12:02.953 13:32:02 version -- scripts/common.sh@366 -- # decimal 2 00:12:02.953 13:32:02 version -- scripts/common.sh@353 -- # local d=2 00:12:02.953 13:32:02 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:02.953 13:32:02 version -- scripts/common.sh@355 -- # echo 2 00:12:02.953 13:32:02 version -- scripts/common.sh@366 -- # ver2[v]=2 00:12:02.953 13:32:02 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:02.953 13:32:02 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:02.953 13:32:02 version -- scripts/common.sh@368 -- # return 0 00:12:02.953 13:32:02 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:02.953 13:32:02 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:02.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.953 --rc genhtml_branch_coverage=1 00:12:02.953 --rc genhtml_function_coverage=1 00:12:02.953 --rc genhtml_legend=1 00:12:02.953 --rc geninfo_all_blocks=1 00:12:02.953 --rc geninfo_unexecuted_blocks=1 00:12:02.953 00:12:02.953 ' 00:12:02.953 13:32:02 version -- 
common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:02.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.953 --rc genhtml_branch_coverage=1 00:12:02.953 --rc genhtml_function_coverage=1 00:12:02.953 --rc genhtml_legend=1 00:12:02.953 --rc geninfo_all_blocks=1 00:12:02.953 --rc geninfo_unexecuted_blocks=1 00:12:02.953 00:12:02.953 ' 00:12:02.953 13:32:02 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:02.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.953 --rc genhtml_branch_coverage=1 00:12:02.953 --rc genhtml_function_coverage=1 00:12:02.953 --rc genhtml_legend=1 00:12:02.953 --rc geninfo_all_blocks=1 00:12:02.953 --rc geninfo_unexecuted_blocks=1 00:12:02.953 00:12:02.953 ' 00:12:02.953 13:32:02 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:02.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.953 --rc genhtml_branch_coverage=1 00:12:02.953 --rc genhtml_function_coverage=1 00:12:02.953 --rc genhtml_legend=1 00:12:02.953 --rc geninfo_all_blocks=1 00:12:02.953 --rc geninfo_unexecuted_blocks=1 00:12:02.953 00:12:02.953 ' 00:12:02.953 13:32:02 version -- app/version.sh@17 -- # get_header_version major 00:12:02.953 13:32:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:02.953 13:32:02 version -- app/version.sh@14 -- # cut -f2 00:12:02.953 13:32:02 version -- app/version.sh@14 -- # tr -d '"' 00:12:02.953 13:32:02 version -- app/version.sh@17 -- # major=25 00:12:02.953 13:32:02 version -- app/version.sh@18 -- # get_header_version minor 00:12:02.953 13:32:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:02.953 13:32:02 version -- app/version.sh@14 -- # cut -f2 00:12:02.953 13:32:02 version -- app/version.sh@14 -- # tr -d '"' 00:12:02.953 13:32:02 version -- app/version.sh@18 -- 
# minor=1 00:12:02.953 13:32:02 version -- app/version.sh@19 -- # get_header_version patch 00:12:02.953 13:32:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:02.953 13:32:02 version -- app/version.sh@14 -- # cut -f2 00:12:02.953 13:32:02 version -- app/version.sh@14 -- # tr -d '"' 00:12:02.953 13:32:02 version -- app/version.sh@19 -- # patch=0 00:12:02.953 13:32:02 version -- app/version.sh@20 -- # get_header_version suffix 00:12:02.953 13:32:02 version -- app/version.sh@14 -- # tr -d '"' 00:12:02.953 13:32:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:02.953 13:32:02 version -- app/version.sh@14 -- # cut -f2 00:12:02.953 13:32:02 version -- app/version.sh@20 -- # suffix=-pre 00:12:02.953 13:32:02 version -- app/version.sh@22 -- # version=25.1 00:12:02.953 13:32:02 version -- app/version.sh@25 -- # (( patch != 0 )) 00:12:02.953 13:32:02 version -- app/version.sh@28 -- # version=25.1rc0 00:12:02.953 13:32:02 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:02.953 13:32:02 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:12:02.953 13:32:02 version -- app/version.sh@30 -- # py_version=25.1rc0 00:12:02.953 13:32:02 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:12:02.953 ************************************ 00:12:02.953 END TEST version 00:12:02.953 ************************************ 00:12:02.953 00:12:02.953 real 0m0.330s 00:12:02.953 user 0m0.198s 00:12:02.953 sys 0m0.189s 00:12:02.953 13:32:02 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.953 13:32:02 version -- 
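The version.sh trace above extracts major/minor/patch/suffix from include/spdk/version.h and assembles "25.1rc0" (patch is appended only when nonzero). A sketch of that assembly as inferred from the trace; the mapping of a "-pre" suffix to an "rc0" tag is an assumption read off the logged values, not confirmed against version.sh itself:

```python
def spdk_version_string(major: int, minor: int, patch: int, suffix: str) -> str:
    # Mirrors the trace: version=25.1; (( patch != 0 )) skipped; then 25.1rc0.
    version = f"{major}.{minor}"
    if patch != 0:
        version += f".{patch}"       # only nonzero patch levels are printed
    if suffix == "-pre":
        version += "rc0"             # assumption: "-pre" maps to an rc0 tag
    return version

print(spdk_version_string(25, 1, 0, "-pre"))  # 25.1rc0
```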
common/autotest_common.sh@10 -- # set +x 00:12:03.213 13:32:02 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:12:03.213 13:32:02 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:12:03.213 13:32:02 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:12:03.213 13:32:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:03.213 13:32:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:03.213 13:32:02 -- common/autotest_common.sh@10 -- # set +x 00:12:03.213 ************************************ 00:12:03.213 START TEST bdev_raid 00:12:03.213 ************************************ 00:12:03.213 13:32:02 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:12:03.213 * Looking for test storage... 00:12:03.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:03.213 13:32:02 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:03.213 13:32:02 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:12:03.213 13:32:02 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:03.213 13:32:02 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:03.213 13:32:02 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:03.213 13:32:02 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:03.213 13:32:02 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:03.213 13:32:02 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:12:03.213 13:32:02 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:12:03.213 13:32:02 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:12:03.213 13:32:02 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:12:03.213 13:32:02 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:12:03.213 13:32:02 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:12:03.213 13:32:02 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:12:03.213 
13:32:02 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:03.213 13:32:02 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:12:03.213 13:32:02 bdev_raid -- scripts/common.sh@345 -- # : 1 00:12:03.213 13:32:02 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:03.213 13:32:02 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:03.213 13:32:02 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:12:03.213 13:32:02 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:12:03.213 13:32:02 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:03.213 13:32:02 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:12:03.213 13:32:02 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:03.213 13:32:02 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:12:03.213 13:32:02 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:12:03.213 13:32:02 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:03.213 13:32:02 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:12:03.213 13:32:02 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:03.213 13:32:02 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:03.213 13:32:02 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:03.213 13:32:02 bdev_raid -- scripts/common.sh@368 -- # return 0 00:12:03.214 13:32:02 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:03.214 13:32:02 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:03.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.214 --rc genhtml_branch_coverage=1 00:12:03.214 --rc genhtml_function_coverage=1 00:12:03.214 --rc genhtml_legend=1 00:12:03.214 --rc geninfo_all_blocks=1 00:12:03.214 --rc geninfo_unexecuted_blocks=1 00:12:03.214 00:12:03.214 ' 00:12:03.214 13:32:02 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 
00:12:03.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.214 --rc genhtml_branch_coverage=1 00:12:03.214 --rc genhtml_function_coverage=1 00:12:03.214 --rc genhtml_legend=1 00:12:03.214 --rc geninfo_all_blocks=1 00:12:03.214 --rc geninfo_unexecuted_blocks=1 00:12:03.214 00:12:03.214 ' 00:12:03.214 13:32:02 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:03.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.214 --rc genhtml_branch_coverage=1 00:12:03.214 --rc genhtml_function_coverage=1 00:12:03.214 --rc genhtml_legend=1 00:12:03.214 --rc geninfo_all_blocks=1 00:12:03.214 --rc geninfo_unexecuted_blocks=1 00:12:03.214 00:12:03.214 ' 00:12:03.214 13:32:02 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:03.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.214 --rc genhtml_branch_coverage=1 00:12:03.214 --rc genhtml_function_coverage=1 00:12:03.214 --rc genhtml_legend=1 00:12:03.214 --rc geninfo_all_blocks=1 00:12:03.214 --rc geninfo_unexecuted_blocks=1 00:12:03.214 00:12:03.214 ' 00:12:03.214 13:32:02 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:03.214 13:32:02 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:12:03.214 13:32:02 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:12:03.472 13:32:02 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:12:03.472 13:32:02 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:12:03.472 13:32:02 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:12:03.472 13:32:02 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:12:03.472 13:32:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:03.472 13:32:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:03.472 13:32:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
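The lcov version check traced repeatedly above (lt 1.15 2 via cmp_versions) splits each version string on ".", "-", and ":" and compares numeric components left to right, with missing components treated as zero. A condensed sketch of that comparison, assuming the bash behavior read off the trace:

```python
import re

def cmp_lt(v1: str, v2: str) -> bool:
    """Sketch of scripts/common.sh `lt`: split on .-: and compare
    numeric components left to right; missing components count as 0."""
    a = re.split(r"[.\-:]", v1)
    b = re.split(r"[.\-:]", v2)
    for i in range(max(len(a), len(b))):
        x = int(a[i]) if i < len(a) and a[i].isdigit() else 0
        y = int(b[i]) if i < len(b) and b[i].isdigit() else 0
        if x != y:
            return x < y
    return False

print(cmp_lt("1.15", "2"))  # True: lcov 1.15 predates 2, as in the trace
```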
00:12:03.472 ************************************ 00:12:03.472 START TEST raid1_resize_data_offset_test 00:12:03.472 ************************************ 00:12:03.472 13:32:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:12:03.472 13:32:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=59829 00:12:03.472 13:32:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:03.472 Process raid pid: 59829 00:12:03.472 13:32:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 59829' 00:12:03.472 13:32:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 59829 00:12:03.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.472 13:32:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 59829 ']' 00:12:03.472 13:32:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.472 13:32:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:03.472 13:32:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.472 13:32:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:03.472 13:32:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.472 [2024-11-20 13:32:02.812676] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:12:03.472 [2024-11-20 13:32:02.812806] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:03.730 [2024-11-20 13:32:02.994406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.730 [2024-11-20 13:32:03.117709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.989 [2024-11-20 13:32:03.353532] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:03.989 [2024-11-20 13:32:03.353818] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:04.247 13:32:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:04.247 13:32:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:12:04.247 13:32:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:12:04.247 13:32:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.247 13:32:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.247 malloc0 00:12:04.247 13:32:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.247 13:32:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:12:04.247 13:32:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.247 13:32:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.507 malloc1 00:12:04.507 13:32:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.507 13:32:03 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:12:04.507 13:32:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.507 13:32:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.507 null0 00:12:04.507 13:32:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.507 13:32:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:12:04.507 13:32:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.507 13:32:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.507 [2024-11-20 13:32:03.831528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:12:04.507 [2024-11-20 13:32:03.833658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:04.507 [2024-11-20 13:32:03.834113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:12:04.507 [2024-11-20 13:32:03.834329] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:04.507 [2024-11-20 13:32:03.834356] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:12:04.507 [2024-11-20 13:32:03.834682] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:12:04.507 [2024-11-20 13:32:03.834881] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:04.507 [2024-11-20 13:32:03.834897] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:12:04.507 [2024-11-20 13:32:03.835121] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:12:04.507 13:32:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.507 13:32:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.507 13:32:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:12:04.507 13:32:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.507 13:32:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.507 13:32:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.507 13:32:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:12:04.507 13:32:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:12:04.507 13:32:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.507 13:32:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.507 [2024-11-20 13:32:03.891466] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:12:04.507 13:32:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.507 13:32:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:12:04.507 13:32:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.507 13:32:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.075 malloc2 00:12:05.075 13:32:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.075 13:32:04 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:12:05.075 13:32:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.075 13:32:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.075 [2024-11-20 13:32:04.536631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:05.075 [2024-11-20 13:32:04.557457] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:05.075 13:32:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.333 [2024-11-20 13:32:04.560005] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:12:05.333 13:32:04 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.333 13:32:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.333 13:32:04 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:12:05.333 13:32:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.333 13:32:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.333 13:32:04 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:12:05.333 13:32:04 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 59829 00:12:05.333 13:32:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 59829 ']' 00:12:05.333 13:32:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 59829 00:12:05.333 13:32:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:12:05.333 13:32:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:12:05.333 13:32:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59829 00:12:05.333 killing process with pid 59829 00:12:05.333 13:32:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:05.333 13:32:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:05.334 13:32:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59829' 00:12:05.334 13:32:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 59829 00:12:05.334 13:32:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 59829 00:12:05.334 [2024-11-20 13:32:04.623223] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:05.334 [2024-11-20 13:32:04.623569] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:12:05.334 [2024-11-20 13:32:04.623645] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.334 [2024-11-20 13:32:04.623672] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:12:05.334 [2024-11-20 13:32:04.660259] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:05.334 [2024-11-20 13:32:04.660666] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:05.334 [2024-11-20 13:32:04.660694] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:12:07.262 [2024-11-20 13:32:06.547257] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:08.636 ************************************ 00:12:08.636 END TEST raid1_resize_data_offset_test 00:12:08.636 ************************************ 00:12:08.636 13:32:07 bdev_raid.raid1_resize_data_offset_test -- 
bdev/bdev_raid.sh@943 -- # return 0 00:12:08.636 00:12:08.636 real 0m5.015s 00:12:08.636 user 0m4.847s 00:12:08.636 sys 0m0.573s 00:12:08.636 13:32:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.636 13:32:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.636 13:32:07 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:12:08.636 13:32:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:08.636 13:32:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:08.636 13:32:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:08.636 ************************************ 00:12:08.636 START TEST raid0_resize_superblock_test 00:12:08.636 ************************************ 00:12:08.636 13:32:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:12:08.636 13:32:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:12:08.636 Process raid pid: 59907 00:12:08.636 13:32:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=59907 00:12:08.636 13:32:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 59907' 00:12:08.636 13:32:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:08.636 13:32:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 59907 00:12:08.636 13:32:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 59907 ']' 00:12:08.636 13:32:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.636 13:32:07 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:12:08.636 13:32:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.636 13:32:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:08.636 13:32:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.636 [2024-11-20 13:32:07.874631] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:12:08.636 [2024-11-20 13:32:07.875994] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:08.636 [2024-11-20 13:32:08.056151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.894 [2024-11-20 13:32:08.212807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.152 [2024-11-20 13:32:08.456582] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:09.152 [2024-11-20 13:32:08.456633] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:09.411 13:32:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:09.411 13:32:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:09.411 13:32:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:12:09.411 13:32:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.411 13:32:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:12:10.347 malloc0 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.347 [2024-11-20 13:32:09.474677] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:12:10.347 [2024-11-20 13:32:09.474941] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.347 [2024-11-20 13:32:09.475024] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:10.347 [2024-11-20 13:32:09.475206] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.347 [2024-11-20 13:32:09.477820] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.347 [2024-11-20 13:32:09.477987] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:12:10.347 pt0 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.347 92523e9d-5a3b-4585-8407-ce057d48fde3 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.347 ab243a7d-1216-44c2-8449-e8e46316b7d7 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.347 0bf25b57-9264-4fbf-9bf0-8628100bb309 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.347 [2024-11-20 13:32:09.589633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ab243a7d-1216-44c2-8449-e8e46316b7d7 is claimed 00:12:10.347 [2024-11-20 13:32:09.589730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 0bf25b57-9264-4fbf-9bf0-8628100bb309 is claimed 00:12:10.347 [2024-11-20 13:32:09.589898] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:10.347 [2024-11-20 13:32:09.589930] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:12:10.347 [2024-11-20 13:32:09.590288] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:10.347 [2024-11-20 13:32:09.590497] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:10.347 [2024-11-20 13:32:09.590509] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:12:10.347 [2024-11-20 13:32:09.590678] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:12:10.347 13:32:09 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:12:10.347 [2024-11-20 13:32:09.721699] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.347 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.347 [2024-11-20 13:32:09.761616] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:12:10.348 [2024-11-20 13:32:09.761648] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'ab243a7d-1216-44c2-8449-e8e46316b7d7' was resized: old size 131072, new size 204800 00:12:10.348 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:10.348 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:12:10.348 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.348 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.348 [2024-11-20 13:32:09.769516] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:12:10.348 [2024-11-20 13:32:09.769545] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '0bf25b57-9264-4fbf-9bf0-8628100bb309' was resized: old size 131072, new size 204800 00:12:10.348 [2024-11-20 13:32:09.769580] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:12:10.348 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.348 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:12:10.348 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.348 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:12:10.348 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.348 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.348 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:12:10.348 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:12:10.348 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:12:10.348 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.348 13:32:09 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.608 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.608 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:12:10.608 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:12:10.608 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:12:10.608 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:12:10.608 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:12:10.608 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.608 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.608 [2024-11-20 13:32:09.865515] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:10.608 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.608 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:12:10.608 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:12:10.608 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:12:10.608 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:12:10.608 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.608 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.608 [2024-11-20 13:32:09.909229] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:12:10.608 [2024-11-20 13:32:09.909310] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:12:10.608 [2024-11-20 13:32:09.909328] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:10.608 [2024-11-20 13:32:09.909345] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:12:10.608 [2024-11-20 13:32:09.909461] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:10.608 [2024-11-20 13:32:09.909497] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:10.608 [2024-11-20 13:32:09.909512] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:12:10.608 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.608 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:12:10.608 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.608 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.608 [2024-11-20 13:32:09.917100] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:12:10.608 [2024-11-20 13:32:09.917269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.608 [2024-11-20 13:32:09.917297] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:12:10.608 [2024-11-20 13:32:09.917311] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.608 [2024-11-20 13:32:09.919821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.608 [2024-11-20 13:32:09.919866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:12:10.608 [2024-11-20 13:32:09.921575] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev ab243a7d-1216-44c2-8449-e8e46316b7d7 00:12:10.608 [2024-11-20 13:32:09.921644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ab243a7d-1216-44c2-8449-e8e46316b7d7 is claimed 00:12:10.608 [2024-11-20 13:32:09.921737] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 0bf25b57-9264-4fbf-9bf0-8628100bb309 00:12:10.608 [2024-11-20 13:32:09.921757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 0bf25b57-9264-4fbf-9bf0-8628100bb309 is claimed 00:12:10.608 [2024-11-20 13:32:09.921927] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 0bf25b57-9264-4fbf-9bf0-8628100bb309 (2) smaller than existing raid bdev Raid (3) 00:12:10.608 [2024-11-20 13:32:09.921960] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev ab243a7d-1216-44c2-8449-e8e46316b7d7: File exists 00:12:10.608 [2024-11-20 13:32:09.921998] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:10.608 [2024-11-20 13:32:09.922012] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:12:10.608 pt0 00:12:10.608 [2024-11-20 13:32:09.922299] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:10.608 [2024-11-20 13:32:09.922466] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:10.608 [2024-11-20 13:32:09.922476] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:12:10.608 [2024-11-20 13:32:09.922631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.608 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.608 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:12:10.608 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.608 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.608 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.608 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:12:10.608 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:12:10.608 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.608 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:12:10.608 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:12:10.608 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.608 [2024-11-20 13:32:09.945887] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:10.608 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.608 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:12:10.608 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:12:10.608 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:12:10.608 13:32:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 59907 00:12:10.608 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 59907 ']' 00:12:10.608 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 59907 00:12:10.608 13:32:09 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:12:10.608 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:10.608 13:32:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59907 00:12:10.608 killing process with pid 59907 00:12:10.608 13:32:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:10.608 13:32:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:10.608 13:32:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59907' 00:12:10.608 13:32:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 59907 00:12:10.608 13:32:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 59907 00:12:10.608 [2024-11-20 13:32:10.027946] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:10.608 [2024-11-20 13:32:10.028275] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:10.608 [2024-11-20 13:32:10.028471] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:10.608 [2024-11-20 13:32:10.028495] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:12:12.512 [2024-11-20 13:32:11.499700] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:13.447 13:32:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:12:13.447 00:12:13.447 real 0m4.869s 00:12:13.447 user 0m5.226s 00:12:13.447 sys 0m0.621s 00:12:13.447 ************************************ 00:12:13.447 END TEST raid0_resize_superblock_test 00:12:13.447 ************************************ 00:12:13.447 13:32:12 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:12:13.447 13:32:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.447 13:32:12 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:12:13.447 13:32:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:13.447 13:32:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:13.447 13:32:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:13.447 ************************************ 00:12:13.447 START TEST raid1_resize_superblock_test 00:12:13.447 ************************************ 00:12:13.447 13:32:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:12:13.447 13:32:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:12:13.447 13:32:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60011 00:12:13.447 Process raid pid: 60011 00:12:13.447 13:32:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:13.447 13:32:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60011' 00:12:13.447 13:32:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60011 00:12:13.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:13.447 13:32:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60011 ']' 00:12:13.447 13:32:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.447 13:32:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:13.447 13:32:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:13.447 13:32:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:13.447 13:32:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.447 [2024-11-20 13:32:12.831174] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:12:13.447 [2024-11-20 13:32:12.831308] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:13.706 [2024-11-20 13:32:13.012336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.706 [2024-11-20 13:32:13.130552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.964 [2024-11-20 13:32:13.356699] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:13.964 [2024-11-20 13:32:13.356898] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:14.222 13:32:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:14.222 13:32:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:14.222 13:32:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 
00:12:14.222 13:32:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.222 13:32:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.789 malloc0 00:12:14.789 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.789 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:12:14.789 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.789 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.789 [2024-11-20 13:32:14.244132] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:12:14.789 [2024-11-20 13:32:14.244330] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.789 [2024-11-20 13:32:14.244362] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:14.789 [2024-11-20 13:32:14.244378] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.789 [2024-11-20 13:32:14.246762] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.789 [2024-11-20 13:32:14.246809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:12:14.789 pt0 00:12:14.789 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.789 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:12:14.789 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.789 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.049 44d81ced-ff3d-420c-aab9-820f5ea81467 00:12:15.049 13:32:14 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.049 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:12:15.049 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.049 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.049 0ca4a9b7-c46b-4399-b89e-d9bed268b7bd 00:12:15.049 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.049 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:12:15.049 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.049 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.049 a754b974-411a-4272-bacf-0487bcf71a71 00:12:15.049 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.049 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:12:15.049 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:12:15.049 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.049 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.049 [2024-11-20 13:32:14.374508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 0ca4a9b7-c46b-4399-b89e-d9bed268b7bd is claimed 00:12:15.049 [2024-11-20 13:32:14.374598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev a754b974-411a-4272-bacf-0487bcf71a71 is claimed 00:12:15.049 [2024-11-20 13:32:14.374734] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:15.049 [2024-11-20 13:32:14.374752] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:12:15.049 [2024-11-20 13:32:14.375028] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:15.049 [2024-11-20 13:32:14.375243] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:15.049 [2024-11-20 13:32:14.375256] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:12:15.049 [2024-11-20 13:32:14.375414] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:15.049 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.049 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:12:15.049 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.049 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:12:15.049 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.049 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.049 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:12:15.049 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:12:15.049 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.049 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.049 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:12:15.049 13:32:14 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.049 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:12:15.049 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:12:15.049 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:12:15.049 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.049 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.049 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:12:15.049 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:12:15.049 [2024-11-20 13:32:14.482687] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:15.049 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.049 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:12:15.049 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:12:15.049 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:12:15.049 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:12:15.049 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.049 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.049 [2024-11-20 13:32:14.522646] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:12:15.049 [2024-11-20 13:32:14.522682] bdev_raid.c:2330:raid_bdev_resize_base_bdev: 
*NOTICE*: base_bdev '0ca4a9b7-c46b-4399-b89e-d9bed268b7bd' was resized: old size 131072, new size 204800 00:12:15.049 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.049 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:12:15.049 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.049 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.309 [2024-11-20 13:32:14.534584] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:12:15.309 [2024-11-20 13:32:14.534617] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'a754b974-411a-4272-bacf-0487bcf71a71' was resized: old size 131072, new size 204800 00:12:15.309 [2024-11-20 13:32:14.534657] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:12:15.309 13:32:14 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.309 [2024-11-20 13:32:14.642503] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.309 [2024-11-20 13:32:14.690348] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:12:15.309 [2024-11-20 13:32:14.690551] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:12:15.309 [2024-11-20 13:32:14.690715] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:12:15.309 [2024-11-20 13:32:14.690947] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:15.309 [2024-11-20 13:32:14.691254] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:15.309 [2024-11-20 13:32:14.691432] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:15.309 [2024-11-20 13:32:14.691571] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.309 [2024-11-20 13:32:14.702192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:12:15.309 [2024-11-20 13:32:14.702353] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.309 [2024-11-20 13:32:14.702407] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:12:15.309 [2024-11-20 13:32:14.702484] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.309 
[2024-11-20 13:32:14.704915] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.309 [2024-11-20 13:32:14.705067] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:12:15.309 [2024-11-20 13:32:14.706783] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 0ca4a9b7-c46b-4399-b89e-d9bed268b7bd 00:12:15.309 [2024-11-20 13:32:14.706992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 0ca4a9b7-c46b-4399-b89e-d9bed268b7bd is claimed 00:12:15.309 [2024-11-20 13:32:14.707221] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev a754b974-411a-4272-bacf-0487bcf71a71 00:12:15.309 [2024-11-20 13:32:14.707347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev a754b974-411a-4272-bacf-0487bcf71a71 pt0 00:12:15.309 is claimed 00:12:15.309 [2024-11-20 13:32:14.707629] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev a754b974-411a-4272-bacf-0487bcf71a71 (2) smaller than existing raid bdev Raid (3) 00:12:15.309 [2024-11-20 13:32:14.707663] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 0ca4a9b7-c46b-4399-b89e-d9bed268b7bd: File exists 00:12:15.309 [2024-11-20 13:32:14.707705] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:15.309 [2024-11-20 13:32:14.707718] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:15.309 [2024-11-20 13:32:14.707979] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:15.309 [2024-11-20 13:32:14.708154] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:15.309 [2024-11-20 13:32:14.708165] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:12:15.309 [2024-11-20 13:32:14.708308] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.309 [2024-11-20 13:32:14.731051] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60011 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test 
-- common/autotest_common.sh@954 -- # '[' -z 60011 ']' 00:12:15.309 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60011 00:12:15.310 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:15.310 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:15.310 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60011 00:12:15.568 killing process with pid 60011 00:12:15.568 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:15.568 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:15.568 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60011' 00:12:15.568 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60011 00:12:15.568 [2024-11-20 13:32:14.813453] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:15.568 [2024-11-20 13:32:14.813534] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:15.568 13:32:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60011 00:12:15.568 [2024-11-20 13:32:14.813589] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:15.568 [2024-11-20 13:32:14.813601] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:12:16.945 [2024-11-20 13:32:16.267835] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:17.976 13:32:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:12:17.976 00:12:17.976 real 0m4.675s 00:12:17.976 user 0m4.854s 00:12:17.976 sys 0m0.643s 
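The block counts asserted in the test above follow from simple arithmetic: each 64 MiB lvol is 131072 blocks of 512 bytes, the superblock variant of RAID1 reserves a region on each base bdev, and usable capacity is bounded by the smallest member. A sketch reproducing the numbers from the log; the 8192-block reservation is inferred from `131072 - 122880` in the output, not taken from SPDK source:

```shell
#!/usr/bin/env bash

BLOCK_SIZE=512
SB_RESERVED_BLOCKS=8192   # inferred: 131072 - 122880 in the log above

# Convert a size in MiB to 512-byte blocks.
mib_to_blocks() { echo $(( $1 * 1024 * 1024 / BLOCK_SIZE )); }

# RAID1 capacity = smallest base bdev minus the reserved superblock region.
raid1_num_blocks() {
    local min=$1 b
    for b in "$@"; do
        if (( b < min )); then min=$b; fi
    done
    echo $(( min - SB_RESERVED_BLOCKS ))
}

raid1_num_blocks "$(mib_to_blocks 64)" "$(mib_to_blocks 64)"    # before resize
raid1_num_blocks "$(mib_to_blocks 100)" "$(mib_to_blocks 100)"  # after both lvols grow to 100 MiB
```

The two results, 122880 and 196608, are exactly what the `(( 122880 == 122880 ))` and `(( 196608 == 196608 ))` checks in the transcript compare against.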
00:12:17.976 ************************************ 00:12:17.976 END TEST raid1_resize_superblock_test 00:12:17.976 ************************************ 00:12:17.977 13:32:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:17.977 13:32:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.237 13:32:17 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:12:18.237 13:32:17 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:12:18.237 13:32:17 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:12:18.237 13:32:17 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:12:18.237 13:32:17 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:12:18.237 13:32:17 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:12:18.237 13:32:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:18.237 13:32:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:18.237 13:32:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:18.237 ************************************ 00:12:18.237 START TEST raid_function_test_raid0 00:12:18.237 ************************************ 00:12:18.237 13:32:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:12:18.237 13:32:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:12:18.238 13:32:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:12:18.238 13:32:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:12:18.238 Process raid pid: 60118 00:12:18.238 13:32:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60118 00:12:18.238 13:32:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L 
bdev_raid 00:12:18.238 13:32:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60118' 00:12:18.238 13:32:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60118 00:12:18.238 13:32:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60118 ']' 00:12:18.238 13:32:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.238 13:32:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:18.238 13:32:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.238 13:32:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:18.238 13:32:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:12:18.238 [2024-11-20 13:32:17.605616] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:12:18.238 [2024-11-20 13:32:17.605914] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:18.496 [2024-11-20 13:32:17.785624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.496 [2024-11-20 13:32:17.906880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.756 [2024-11-20 13:32:18.126027] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:18.756 [2024-11-20 13:32:18.126247] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:19.014 13:32:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:19.014 13:32:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:12:19.014 13:32:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:12:19.014 13:32:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.014 13:32:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:12:19.014 Base_1 00:12:19.014 13:32:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.014 13:32:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:12:19.014 13:32:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.014 13:32:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:12:19.274 Base_2 00:12:19.274 13:32:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.274 13:32:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 
64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:12:19.274 13:32:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.274 13:32:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:12:19.274 [2024-11-20 13:32:18.538307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:12:19.274 [2024-11-20 13:32:18.540322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:12:19.274 [2024-11-20 13:32:18.540553] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:19.274 [2024-11-20 13:32:18.540577] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:19.274 [2024-11-20 13:32:18.540844] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:19.274 [2024-11-20 13:32:18.540981] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:19.274 [2024-11-20 13:32:18.540991] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:12:19.274 [2024-11-20 13:32:18.541162] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:19.274 13:32:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.274 13:32:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:19.274 13:32:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.274 13:32:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:12:19.274 13:32:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:12:19.274 13:32:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.274 13:32:18 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:12:19.274 13:32:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:12:19.274 13:32:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:12:19.274 13:32:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:19.274 13:32:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:12:19.274 13:32:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:19.275 13:32:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:19.275 13:32:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:19.275 13:32:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:12:19.275 13:32:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:19.275 13:32:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:19.275 13:32:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:12:19.533 [2024-11-20 13:32:18.794000] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:19.533 /dev/nbd0 00:12:19.533 13:32:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:19.533 13:32:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:19.533 13:32:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:19.533 13:32:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:12:19.533 13:32:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:19.533 
13:32:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:19.533 13:32:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:19.533 13:32:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:12:19.533 13:32:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:19.533 13:32:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:19.534 13:32:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:19.534 1+0 records in 00:12:19.534 1+0 records out 00:12:19.534 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296616 s, 13.8 MB/s 00:12:19.534 13:32:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.534 13:32:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:12:19.534 13:32:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.534 13:32:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:19.534 13:32:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:12:19.534 13:32:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:19.534 13:32:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:19.534 13:32:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:12:19.534 13:32:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:12:19.534 13:32:18 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:12:19.805 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:19.805 { 00:12:19.805 "nbd_device": "/dev/nbd0", 00:12:19.805 "bdev_name": "raid" 00:12:19.805 } 00:12:19.805 ]' 00:12:19.805 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:19.805 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:19.805 { 00:12:19.805 "nbd_device": "/dev/nbd0", 00:12:19.805 "bdev_name": "raid" 00:12:19.805 } 00:12:19.805 ]' 00:12:19.805 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:12:19.805 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:12:19.805 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:19.805 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:12:19.805 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:12:19.805 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:12:19.805 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:12:19.805 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:12:19.805 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:12:19.805 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:12:19.805 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:12:19.805 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:12:19.805 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v 
LOG-SEC 00:12:19.805 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:12:19.805 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:12:19.805 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:12:19.805 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:12:19.805 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:12:19.805 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:12:19.805 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:12:19.805 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:12:19.805 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:12:19.805 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:12:19.806 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:12:19.806 4096+0 records in 00:12:19.806 4096+0 records out 00:12:19.806 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.040472 s, 51.8 MB/s 00:12:19.806 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:12:20.070 4096+0 records in 00:12:20.070 4096+0 records out 00:12:20.070 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.249636 s, 8.4 MB/s 00:12:20.070 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:12:20.070 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:12:20.070 13:32:19 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:12:20.070 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:12:20.070 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:12:20.070 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:12:20.070 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:12:20.070 128+0 records in 00:12:20.070 128+0 records out 00:12:20.070 65536 bytes (66 kB, 64 KiB) copied, 0.00180935 s, 36.2 MB/s 00:12:20.070 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:12:20.070 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:12:20.070 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:12:20.070 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:12:20.070 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:12:20.070 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:12:20.070 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:12:20.070 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:12:20.070 2035+0 records in 00:12:20.070 2035+0 records out 00:12:20.070 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.021396 s, 48.7 MB/s 00:12:20.070 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:12:20.329 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:12:20.330 13:32:19 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:12:20.330 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:12:20.330 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:12:20.330 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:12:20.330 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:12:20.330 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:12:20.330 456+0 records in 00:12:20.330 456+0 records out 00:12:20.330 233472 bytes (233 kB, 228 KiB) copied, 0.00591504 s, 39.5 MB/s 00:12:20.330 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:12:20.330 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:12:20.330 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:12:20.330 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:12:20.330 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:12:20.330 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:12:20.330 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:20.330 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:20.330 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:20.330 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:20.330 13:32:19 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:12:20.330 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:20.330 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:20.588 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:20.588 [2024-11-20 13:32:19.834561] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:20.588 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:20.588 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:20.588 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:20.588 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:20.588 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:20.588 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:12:20.588 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:12:20.588 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:12:20.588 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:12:20.589 13:32:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:12:20.589 13:32:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:20.589 13:32:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:20.589 13:32:20 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:12:20.846 13:32:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:20.846 13:32:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:20.846 13:32:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:20.846 13:32:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:12:20.846 13:32:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:12:20.846 13:32:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:20.846 13:32:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:12:20.846 13:32:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:12:20.846 13:32:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60118 00:12:20.846 13:32:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60118 ']' 00:12:20.846 13:32:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60118 00:12:20.846 13:32:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:12:20.846 13:32:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:20.846 13:32:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60118 00:12:20.846 killing process with pid 60118 00:12:20.846 13:32:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:20.846 13:32:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:20.847 13:32:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60118' 00:12:20.847 13:32:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 
60118 00:12:20.847 [2024-11-20 13:32:20.141514] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:20.847 [2024-11-20 13:32:20.141620] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:20.847 13:32:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60118 00:12:20.847 [2024-11-20 13:32:20.141668] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:20.847 [2024-11-20 13:32:20.141686] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:12:21.104 [2024-11-20 13:32:20.349595] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:22.480 ************************************ 00:12:22.480 END TEST raid_function_test_raid0 00:12:22.480 ************************************ 00:12:22.480 13:32:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:12:22.480 00:12:22.480 real 0m4.030s 00:12:22.480 user 0m4.559s 00:12:22.480 sys 0m1.103s 00:12:22.481 13:32:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:22.481 13:32:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:12:22.481 13:32:21 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:12:22.481 13:32:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:22.481 13:32:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:22.481 13:32:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:22.481 ************************************ 00:12:22.481 START TEST raid_function_test_concat 00:12:22.481 ************************************ 00:12:22.481 13:32:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:12:22.481 13:32:21 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:12:22.481 13:32:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:12:22.481 13:32:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:12:22.481 13:32:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60243 00:12:22.481 Process raid pid: 60243 00:12:22.481 13:32:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60243' 00:12:22.481 13:32:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:22.481 13:32:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60243 00:12:22.481 13:32:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60243 ']' 00:12:22.481 13:32:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.481 13:32:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:22.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.481 13:32:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.481 13:32:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:22.481 13:32:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:12:22.481 [2024-11-20 13:32:21.696007] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:12:22.481 [2024-11-20 13:32:21.696142] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:22.481 [2024-11-20 13:32:21.878280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.741 [2024-11-20 13:32:22.014370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.000 [2024-11-20 13:32:22.227026] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:23.000 [2024-11-20 13:32:22.227076] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:23.259 13:32:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:23.259 13:32:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:12:23.259 13:32:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:12:23.259 13:32:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.259 13:32:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:12:23.259 Base_1 00:12:23.259 13:32:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.259 13:32:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:12:23.259 13:32:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.259 13:32:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:12:23.259 Base_2 00:12:23.259 13:32:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.259 13:32:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:12:23.259 13:32:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.259 13:32:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:12:23.259 [2024-11-20 13:32:22.614147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:12:23.259 [2024-11-20 13:32:22.616179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:12:23.259 [2024-11-20 13:32:22.616277] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:23.259 [2024-11-20 13:32:22.616295] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:23.259 [2024-11-20 13:32:22.616568] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:23.259 [2024-11-20 13:32:22.616741] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:23.259 [2024-11-20 13:32:22.616752] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:12:23.259 [2024-11-20 13:32:22.616909] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:23.259 13:32:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.259 13:32:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:23.259 13:32:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.259 13:32:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:12:23.259 13:32:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:12:23.259 13:32:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.259 13:32:22 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:12:23.259 13:32:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:12:23.259 13:32:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:12:23.259 13:32:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:23.259 13:32:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:12:23.259 13:32:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:23.259 13:32:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:23.259 13:32:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:23.259 13:32:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:12:23.259 13:32:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:23.259 13:32:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:23.259 13:32:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:12:23.519 [2024-11-20 13:32:22.861829] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:23.519 /dev/nbd0 00:12:23.519 13:32:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:23.519 13:32:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:23.519 13:32:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:23.519 13:32:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:12:23.519 13:32:22 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:23.519 13:32:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:23.519 13:32:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:23.519 13:32:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:12:23.519 13:32:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:23.519 13:32:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:23.519 13:32:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:23.519 1+0 records in 00:12:23.519 1+0 records out 00:12:23.519 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352271 s, 11.6 MB/s 00:12:23.519 13:32:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:23.519 13:32:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:12:23.519 13:32:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:23.519 13:32:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:23.519 13:32:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:12:23.519 13:32:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:23.519 13:32:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:23.519 13:32:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:12:23.519 13:32:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk.sock 00:12:23.519 13:32:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:12:23.778 13:32:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:23.778 { 00:12:23.778 "nbd_device": "/dev/nbd0", 00:12:23.778 "bdev_name": "raid" 00:12:23.779 } 00:12:23.779 ]' 00:12:23.779 13:32:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:23.779 { 00:12:23.779 "nbd_device": "/dev/nbd0", 00:12:23.779 "bdev_name": "raid" 00:12:23.779 } 00:12:23.779 ]' 00:12:23.779 13:32:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:23.779 13:32:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:12:23.779 13:32:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:23.779 13:32:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:12:23.779 13:32:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:12:23.779 13:32:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:12:23.779 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:12:23.779 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:12:23.779 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:12:23.779 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:12:23.779 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:12:23.779 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:12:23.779 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 
00:12:23.779 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:12:23.779 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:12:23.779 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:12:23.779 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:12:23.779 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:12:23.779 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:12:23.779 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:12:23.779 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:12:23.779 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:12:23.779 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:12:23.779 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:12:23.779 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:12:23.779 4096+0 records in 00:12:23.779 4096+0 records out 00:12:23.779 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0348235 s, 60.2 MB/s 00:12:23.779 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:12:24.036 4096+0 records in 00:12:24.036 4096+0 records out 00:12:24.036 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.260219 s, 8.1 MB/s 00:12:24.036 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:12:24.036 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 
2097152 /raidtest/raidrandtest /dev/nbd0 00:12:24.294 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:12:24.294 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:12:24.294 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:12:24.294 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:12:24.294 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:12:24.294 128+0 records in 00:12:24.295 128+0 records out 00:12:24.295 65536 bytes (66 kB, 64 KiB) copied, 0.00172193 s, 38.1 MB/s 00:12:24.295 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:12:24.295 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:12:24.295 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:12:24.295 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:12:24.295 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:12:24.295 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:12:24.295 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:12:24.295 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:12:24.295 2035+0 records in 00:12:24.295 2035+0 records out 00:12:24.295 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0176345 s, 59.1 MB/s 00:12:24.295 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:12:24.295 13:32:23 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:12:24.295 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:12:24.295 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:12:24.295 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:12:24.295 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:12:24.295 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:12:24.295 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:12:24.295 456+0 records in 00:12:24.295 456+0 records out 00:12:24.295 233472 bytes (233 kB, 228 KiB) copied, 0.00594971 s, 39.2 MB/s 00:12:24.295 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:12:24.295 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:12:24.295 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:12:24.295 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:12:24.295 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:12:24.295 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:12:24.295 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:24.295 13:32:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:24.295 13:32:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:24.295 
13:32:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:24.295 13:32:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:12:24.295 13:32:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:24.295 13:32:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:24.553 13:32:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:24.553 [2024-11-20 13:32:23.863581] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.553 13:32:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:24.553 13:32:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:24.553 13:32:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:24.553 13:32:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:24.553 13:32:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:24.553 13:32:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:12:24.553 13:32:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:12:24.554 13:32:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:12:24.554 13:32:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:12:24.554 13:32:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:12:24.812 13:32:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:24.812 13:32:24 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:24.812 13:32:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:24.812 13:32:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:24.812 13:32:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:12:24.812 13:32:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:24.812 13:32:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:12:24.812 13:32:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:12:24.812 13:32:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:12:24.812 13:32:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:12:24.812 13:32:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:12:24.812 13:32:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60243 00:12:24.812 13:32:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60243 ']' 00:12:24.812 13:32:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60243 00:12:24.812 13:32:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:12:24.812 13:32:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:24.812 13:32:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60243 00:12:24.812 13:32:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:24.812 killing process with pid 60243 00:12:24.812 13:32:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:24.812 13:32:24 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 60243' 00:12:24.812 13:32:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60243 00:12:24.812 [2024-11-20 13:32:24.223525] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:24.812 [2024-11-20 13:32:24.223641] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:24.812 13:32:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60243 00:12:24.812 [2024-11-20 13:32:24.223696] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:24.812 [2024-11-20 13:32:24.223711] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:12:25.071 [2024-11-20 13:32:24.432843] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:26.448 13:32:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:12:26.448 00:12:26.448 real 0m3.970s 00:12:26.448 user 0m4.520s 00:12:26.448 sys 0m1.085s 00:12:26.448 13:32:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.448 ************************************ 00:12:26.448 END TEST raid_function_test_concat 00:12:26.448 13:32:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:12:26.448 ************************************ 00:12:26.448 13:32:25 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:12:26.448 13:32:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:26.448 13:32:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.448 13:32:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:26.448 ************************************ 00:12:26.448 START TEST raid0_resize_test 00:12:26.448 ************************************ 00:12:26.448 13:32:25 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:12:26.448 13:32:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:12:26.448 13:32:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:12:26.448 13:32:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:12:26.448 13:32:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:12:26.448 13:32:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:12:26.448 13:32:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:12:26.448 13:32:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:12:26.448 13:32:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:12:26.448 13:32:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60372 00:12:26.448 13:32:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60372' 00:12:26.448 Process raid pid: 60372 00:12:26.448 13:32:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:26.448 13:32:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60372 00:12:26.448 13:32:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60372 ']' 00:12:26.448 13:32:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.448 13:32:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:26.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:26.448 13:32:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.448 13:32:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:26.448 13:32:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.448 [2024-11-20 13:32:25.767288] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:12:26.448 [2024-11-20 13:32:25.767473] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.707 [2024-11-20 13:32:25.964404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.707 [2024-11-20 13:32:26.088253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.965 [2024-11-20 13:32:26.317334] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:26.965 [2024-11-20 13:32:26.317398] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:27.224 13:32:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:27.224 13:32:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:12:27.224 13:32:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:12:27.224 13:32:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.224 13:32:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.224 Base_1 00:12:27.224 13:32:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.224 13:32:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:12:27.224 
13:32:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.224 13:32:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.224 Base_2 00:12:27.224 13:32:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.224 13:32:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:12:27.224 13:32:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:12:27.224 13:32:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.224 13:32:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.224 [2024-11-20 13:32:26.697478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:12:27.224 [2024-11-20 13:32:26.699508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:12:27.224 [2024-11-20 13:32:26.699571] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:27.224 [2024-11-20 13:32:26.699585] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:27.224 [2024-11-20 13:32:26.699846] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:12:27.224 [2024-11-20 13:32:26.699965] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:27.224 [2024-11-20 13:32:26.699976] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:12:27.224 [2024-11-20 13:32:26.700124] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.224 13:32:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.224 13:32:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:12:27.224 
13:32:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.224 13:32:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.224 [2024-11-20 13:32:26.705444] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:12:27.224 [2024-11-20 13:32:26.705477] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:12:27.484 true 00:12:27.484 13:32:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.484 13:32:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:12:27.484 13:32:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.484 13:32:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:12:27.484 13:32:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.484 [2024-11-20 13:32:26.717592] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:27.484 13:32:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.484 13:32:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:12:27.484 13:32:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:12:27.484 13:32:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:12:27.484 13:32:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:12:27.484 13:32:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:12:27.484 13:32:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:12:27.484 13:32:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.484 13:32:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 
-- # set +x 00:12:27.484 [2024-11-20 13:32:26.765366] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:12:27.484 [2024-11-20 13:32:26.765398] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:12:27.484 [2024-11-20 13:32:26.765433] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:12:27.484 true 00:12:27.484 13:32:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.484 13:32:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:12:27.484 13:32:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:12:27.484 13:32:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.484 13:32:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.484 [2024-11-20 13:32:26.777522] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:27.484 13:32:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.484 13:32:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:12:27.484 13:32:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:12:27.484 13:32:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:12:27.484 13:32:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:12:27.484 13:32:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:12:27.484 13:32:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60372 00:12:27.484 13:32:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60372 ']' 00:12:27.484 13:32:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60372 
00:12:27.484 13:32:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:12:27.484 13:32:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:27.484 13:32:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60372 00:12:27.484 13:32:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:27.484 13:32:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:27.484 killing process with pid 60372 00:12:27.484 13:32:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60372' 00:12:27.484 13:32:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60372 00:12:27.484 [2024-11-20 13:32:26.862827] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:27.485 [2024-11-20 13:32:26.862920] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:27.485 13:32:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60372 00:12:27.485 [2024-11-20 13:32:26.862970] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:27.485 [2024-11-20 13:32:26.862981] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:12:27.485 [2024-11-20 13:32:26.881318] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:28.863 13:32:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:12:28.863 00:12:28.863 real 0m2.374s 00:12:28.863 user 0m2.551s 00:12:28.863 sys 0m0.421s 00:12:28.863 13:32:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:28.863 13:32:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.863 ************************************ 00:12:28.863 END TEST 
raid0_resize_test 00:12:28.863 ************************************ 00:12:28.863 13:32:28 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:12:28.863 13:32:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:28.863 13:32:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:28.863 13:32:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:28.863 ************************************ 00:12:28.863 START TEST raid1_resize_test 00:12:28.863 ************************************ 00:12:28.863 13:32:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:12:28.863 13:32:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:12:28.863 13:32:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:12:28.863 13:32:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:12:28.863 13:32:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:12:28.863 13:32:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:12:28.863 13:32:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:12:28.863 13:32:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:12:28.863 13:32:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:12:28.863 13:32:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60428 00:12:28.863 13:32:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60428' 00:12:28.863 Process raid pid: 60428 00:12:28.863 13:32:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:28.863 13:32:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60428 00:12:28.863 13:32:28 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60428 ']' 00:12:28.863 13:32:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.863 13:32:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:28.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.863 13:32:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.863 13:32:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:28.863 13:32:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.863 [2024-11-20 13:32:28.186182] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:12:28.863 [2024-11-20 13:32:28.186679] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.122 [2024-11-20 13:32:28.352298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.122 [2024-11-20 13:32:28.464037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.381 [2024-11-20 13:32:28.658873] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:29.381 [2024-11-20 13:32:28.658923] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:29.639 13:32:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:29.639 13:32:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:12:29.639 13:32:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:12:29.639 13:32:29 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.639 13:32:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.639 Base_1 00:12:29.639 13:32:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.639 13:32:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:12:29.639 13:32:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.639 13:32:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.639 Base_2 00:12:29.639 13:32:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.639 13:32:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:12:29.639 13:32:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:12:29.639 13:32:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.639 13:32:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.639 [2024-11-20 13:32:29.058245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:12:29.639 [2024-11-20 13:32:29.060319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:12:29.639 [2024-11-20 13:32:29.060394] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:29.639 [2024-11-20 13:32:29.060409] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:29.639 [2024-11-20 13:32:29.060695] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:12:29.639 [2024-11-20 13:32:29.060844] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:29.639 [2024-11-20 13:32:29.060862] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:12:29.639 [2024-11-20 13:32:29.061027] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.639 13:32:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.639 13:32:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:12:29.639 13:32:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.639 13:32:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.639 [2024-11-20 13:32:29.066200] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:12:29.639 [2024-11-20 13:32:29.066235] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:12:29.639 true 00:12:29.639 13:32:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.639 13:32:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:12:29.639 13:32:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.639 13:32:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.639 13:32:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:12:29.639 [2024-11-20 13:32:29.078370] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:29.639 13:32:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.639 13:32:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:12:29.639 13:32:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:12:29.639 13:32:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:12:29.639 13:32:29 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:12:29.639 13:32:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:12:29.639 13:32:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:12:29.639 13:32:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.639 13:32:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.639 [2024-11-20 13:32:29.122200] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:12:29.639 [2024-11-20 13:32:29.122231] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:12:29.639 [2024-11-20 13:32:29.122262] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:12:29.910 true 00:12:29.910 13:32:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.910 13:32:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:12:29.910 13:32:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.910 13:32:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.910 13:32:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:12:29.910 [2024-11-20 13:32:29.134375] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:29.910 13:32:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.910 13:32:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:12:29.910 13:32:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:12:29.910 13:32:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:12:29.910 13:32:29 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:12:29.910 13:32:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:12:29.910 13:32:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60428 00:12:29.910 13:32:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60428 ']' 00:12:29.910 13:32:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60428 00:12:29.910 13:32:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:12:29.910 13:32:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:29.910 13:32:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60428 00:12:29.910 13:32:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:29.910 13:32:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:29.910 killing process with pid 60428 00:12:29.910 13:32:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60428' 00:12:29.910 13:32:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60428 00:12:29.910 [2024-11-20 13:32:29.211971] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:29.911 [2024-11-20 13:32:29.212072] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:29.911 13:32:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60428 00:12:29.911 [2024-11-20 13:32:29.212542] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:29.911 [2024-11-20 13:32:29.212571] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:12:29.911 [2024-11-20 13:32:29.229940] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:12:31.293 13:32:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:12:31.293 00:12:31.293 real 0m2.273s 00:12:31.293 user 0m2.380s 00:12:31.293 sys 0m0.388s 00:12:31.293 13:32:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:31.293 13:32:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.293 ************************************ 00:12:31.293 END TEST raid1_resize_test 00:12:31.293 ************************************ 00:12:31.293 13:32:30 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:12:31.293 13:32:30 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:31.293 13:32:30 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:12:31.293 13:32:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:31.293 13:32:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:31.293 13:32:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:31.293 ************************************ 00:12:31.293 START TEST raid_state_function_test 00:12:31.293 ************************************ 00:12:31.293 13:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:12:31.293 13:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:31.293 13:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:12:31.293 13:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:31.293 13:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:31.293 13:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:31.293 13:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 
-- # (( i <= num_base_bdevs )) 00:12:31.293 13:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:31.293 13:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:31.293 13:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:31.293 13:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:31.293 13:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:31.293 13:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:31.293 13:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:31.293 13:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:31.293 13:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:31.293 13:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:31.293 13:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:31.293 13:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:31.293 13:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:31.293 13:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:31.293 13:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:31.293 13:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:31.293 13:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:31.293 13:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60491 
00:12:31.293 13:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:31.293 Process raid pid: 60491 00:12:31.293 13:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60491' 00:12:31.293 13:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60491 00:12:31.293 13:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60491 ']' 00:12:31.293 13:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.293 13:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:31.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.293 13:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.293 13:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:31.293 13:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.293 [2024-11-20 13:32:30.541426] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:12:31.293 [2024-11-20 13:32:30.541557] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.293 [2024-11-20 13:32:30.705520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.551 [2024-11-20 13:32:30.829006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.809 [2024-11-20 13:32:31.053025] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:31.809 [2024-11-20 13:32:31.053082] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:32.068 13:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:32.068 13:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:32.068 13:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:32.068 13:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.068 13:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.068 [2024-11-20 13:32:31.400180] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:32.068 [2024-11-20 13:32:31.400237] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:32.068 [2024-11-20 13:32:31.400251] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:32.068 [2024-11-20 13:32:31.400267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:32.068 13:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.068 13:32:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:32.068 13:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:32.068 13:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:32.068 13:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:32.068 13:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:32.068 13:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:32.068 13:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.068 13:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.068 13:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.068 13:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.068 13:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.068 13:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.068 13:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.068 13:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.068 13:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.068 13:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.068 "name": "Existed_Raid", 00:12:32.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.068 "strip_size_kb": 64, 00:12:32.068 "state": "configuring", 00:12:32.068 
"raid_level": "raid0", 00:12:32.068 "superblock": false, 00:12:32.068 "num_base_bdevs": 2, 00:12:32.068 "num_base_bdevs_discovered": 0, 00:12:32.068 "num_base_bdevs_operational": 2, 00:12:32.068 "base_bdevs_list": [ 00:12:32.068 { 00:12:32.068 "name": "BaseBdev1", 00:12:32.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.068 "is_configured": false, 00:12:32.068 "data_offset": 0, 00:12:32.068 "data_size": 0 00:12:32.068 }, 00:12:32.068 { 00:12:32.068 "name": "BaseBdev2", 00:12:32.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.068 "is_configured": false, 00:12:32.068 "data_offset": 0, 00:12:32.068 "data_size": 0 00:12:32.068 } 00:12:32.068 ] 00:12:32.068 }' 00:12:32.068 13:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.068 13:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.326 13:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:32.326 13:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.326 13:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.326 [2024-11-20 13:32:31.795565] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:32.326 [2024-11-20 13:32:31.795610] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:32.326 13:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.326 13:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:32.326 13:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.326 13:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:32.326 [2024-11-20 13:32:31.803529] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:32.326 [2024-11-20 13:32:31.803586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:32.326 [2024-11-20 13:32:31.803597] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:32.326 [2024-11-20 13:32:31.803613] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:32.326 13:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.326 13:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:32.326 13:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.326 13:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.650 [2024-11-20 13:32:31.849743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:32.650 BaseBdev1 00:12:32.650 13:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.650 13:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:32.650 13:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:32.650 13:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:32.650 13:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:32.650 13:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:32.650 13:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:32.650 13:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:12:32.650 13:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.650 13:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.650 13:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.650 13:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:32.650 13:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.650 13:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.650 [ 00:12:32.650 { 00:12:32.650 "name": "BaseBdev1", 00:12:32.650 "aliases": [ 00:12:32.650 "accc471b-b200-4aba-9180-717136463e20" 00:12:32.650 ], 00:12:32.650 "product_name": "Malloc disk", 00:12:32.650 "block_size": 512, 00:12:32.650 "num_blocks": 65536, 00:12:32.650 "uuid": "accc471b-b200-4aba-9180-717136463e20", 00:12:32.650 "assigned_rate_limits": { 00:12:32.650 "rw_ios_per_sec": 0, 00:12:32.650 "rw_mbytes_per_sec": 0, 00:12:32.650 "r_mbytes_per_sec": 0, 00:12:32.650 "w_mbytes_per_sec": 0 00:12:32.650 }, 00:12:32.650 "claimed": true, 00:12:32.650 "claim_type": "exclusive_write", 00:12:32.650 "zoned": false, 00:12:32.650 "supported_io_types": { 00:12:32.650 "read": true, 00:12:32.650 "write": true, 00:12:32.650 "unmap": true, 00:12:32.650 "flush": true, 00:12:32.650 "reset": true, 00:12:32.650 "nvme_admin": false, 00:12:32.650 "nvme_io": false, 00:12:32.650 "nvme_io_md": false, 00:12:32.650 "write_zeroes": true, 00:12:32.650 "zcopy": true, 00:12:32.650 "get_zone_info": false, 00:12:32.650 "zone_management": false, 00:12:32.650 "zone_append": false, 00:12:32.650 "compare": false, 00:12:32.650 "compare_and_write": false, 00:12:32.650 "abort": true, 00:12:32.650 "seek_hole": false, 00:12:32.650 "seek_data": false, 00:12:32.650 "copy": true, 00:12:32.650 "nvme_iov_md": 
false 00:12:32.650 }, 00:12:32.650 "memory_domains": [ 00:12:32.650 { 00:12:32.650 "dma_device_id": "system", 00:12:32.650 "dma_device_type": 1 00:12:32.650 }, 00:12:32.650 { 00:12:32.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.650 "dma_device_type": 2 00:12:32.650 } 00:12:32.650 ], 00:12:32.650 "driver_specific": {} 00:12:32.650 } 00:12:32.650 ] 00:12:32.650 13:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.650 13:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:32.650 13:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:32.650 13:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:32.650 13:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:32.650 13:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:32.650 13:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:32.650 13:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:32.650 13:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.650 13:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.650 13:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.650 13:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.650 13:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.650 13:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.650 
13:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.650 13:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.650 13:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.650 13:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.650 "name": "Existed_Raid", 00:12:32.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.650 "strip_size_kb": 64, 00:12:32.650 "state": "configuring", 00:12:32.650 "raid_level": "raid0", 00:12:32.650 "superblock": false, 00:12:32.650 "num_base_bdevs": 2, 00:12:32.650 "num_base_bdevs_discovered": 1, 00:12:32.650 "num_base_bdevs_operational": 2, 00:12:32.650 "base_bdevs_list": [ 00:12:32.650 { 00:12:32.650 "name": "BaseBdev1", 00:12:32.650 "uuid": "accc471b-b200-4aba-9180-717136463e20", 00:12:32.650 "is_configured": true, 00:12:32.650 "data_offset": 0, 00:12:32.650 "data_size": 65536 00:12:32.650 }, 00:12:32.650 { 00:12:32.650 "name": "BaseBdev2", 00:12:32.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.650 "is_configured": false, 00:12:32.650 "data_offset": 0, 00:12:32.651 "data_size": 0 00:12:32.651 } 00:12:32.651 ] 00:12:32.651 }' 00:12:32.651 13:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.651 13:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.923 13:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:32.923 13:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.923 13:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.923 [2024-11-20 13:32:32.337216] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:32.923 [2024-11-20 13:32:32.337276] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:32.923 13:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.923 13:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:32.923 13:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.923 13:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.923 [2024-11-20 13:32:32.349234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:32.923 [2024-11-20 13:32:32.351585] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:32.923 [2024-11-20 13:32:32.351636] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:32.923 13:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.923 13:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:32.923 13:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:32.923 13:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:32.923 13:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:32.923 13:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:32.923 13:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:32.923 13:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:32.923 13:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:12:32.923 13:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.923 13:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.923 13:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.923 13:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.923 13:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.923 13:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.923 13:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.923 13:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.923 13:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.923 13:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.923 "name": "Existed_Raid", 00:12:32.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.923 "strip_size_kb": 64, 00:12:32.923 "state": "configuring", 00:12:32.923 "raid_level": "raid0", 00:12:32.923 "superblock": false, 00:12:32.923 "num_base_bdevs": 2, 00:12:32.923 "num_base_bdevs_discovered": 1, 00:12:32.923 "num_base_bdevs_operational": 2, 00:12:32.923 "base_bdevs_list": [ 00:12:32.923 { 00:12:32.923 "name": "BaseBdev1", 00:12:32.923 "uuid": "accc471b-b200-4aba-9180-717136463e20", 00:12:32.923 "is_configured": true, 00:12:32.923 "data_offset": 0, 00:12:32.923 "data_size": 65536 00:12:32.923 }, 00:12:32.923 { 00:12:32.923 "name": "BaseBdev2", 00:12:32.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.923 "is_configured": false, 00:12:32.923 "data_offset": 0, 00:12:32.923 "data_size": 0 00:12:32.923 } 00:12:32.923 
] 00:12:32.923 }' 00:12:32.923 13:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.923 13:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.490 13:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:33.490 13:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.490 13:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.490 [2024-11-20 13:32:32.826345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:33.490 [2024-11-20 13:32:32.826401] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:33.490 [2024-11-20 13:32:32.826412] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:33.490 [2024-11-20 13:32:32.826692] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:33.490 [2024-11-20 13:32:32.826884] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:33.490 [2024-11-20 13:32:32.826911] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:33.490 [2024-11-20 13:32:32.827226] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:33.490 BaseBdev2 00:12:33.490 13:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.490 13:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:33.490 13:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:33.490 13:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:33.490 13:32:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:33.490 13:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:33.490 13:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:33.490 13:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:33.490 13:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.490 13:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.490 13:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.490 13:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:33.490 13:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.490 13:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.490 [ 00:12:33.490 { 00:12:33.490 "name": "BaseBdev2", 00:12:33.490 "aliases": [ 00:12:33.490 "d62eca44-acde-4991-8f7d-e766733e7152" 00:12:33.490 ], 00:12:33.490 "product_name": "Malloc disk", 00:12:33.490 "block_size": 512, 00:12:33.490 "num_blocks": 65536, 00:12:33.490 "uuid": "d62eca44-acde-4991-8f7d-e766733e7152", 00:12:33.490 "assigned_rate_limits": { 00:12:33.490 "rw_ios_per_sec": 0, 00:12:33.490 "rw_mbytes_per_sec": 0, 00:12:33.490 "r_mbytes_per_sec": 0, 00:12:33.490 "w_mbytes_per_sec": 0 00:12:33.490 }, 00:12:33.490 "claimed": true, 00:12:33.490 "claim_type": "exclusive_write", 00:12:33.490 "zoned": false, 00:12:33.490 "supported_io_types": { 00:12:33.490 "read": true, 00:12:33.490 "write": true, 00:12:33.490 "unmap": true, 00:12:33.490 "flush": true, 00:12:33.490 "reset": true, 00:12:33.490 "nvme_admin": false, 00:12:33.490 "nvme_io": false, 00:12:33.490 "nvme_io_md": 
false, 00:12:33.490 "write_zeroes": true, 00:12:33.490 "zcopy": true, 00:12:33.490 "get_zone_info": false, 00:12:33.490 "zone_management": false, 00:12:33.490 "zone_append": false, 00:12:33.490 "compare": false, 00:12:33.490 "compare_and_write": false, 00:12:33.490 "abort": true, 00:12:33.490 "seek_hole": false, 00:12:33.490 "seek_data": false, 00:12:33.490 "copy": true, 00:12:33.490 "nvme_iov_md": false 00:12:33.490 }, 00:12:33.490 "memory_domains": [ 00:12:33.490 { 00:12:33.490 "dma_device_id": "system", 00:12:33.490 "dma_device_type": 1 00:12:33.490 }, 00:12:33.490 { 00:12:33.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.490 "dma_device_type": 2 00:12:33.490 } 00:12:33.490 ], 00:12:33.490 "driver_specific": {} 00:12:33.490 } 00:12:33.490 ] 00:12:33.490 13:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.490 13:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:33.490 13:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:33.490 13:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:33.490 13:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:12:33.490 13:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:33.490 13:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:33.490 13:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:33.490 13:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:33.490 13:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:33.490 13:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:33.490 13:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.490 13:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.490 13:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.490 13:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.490 13:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.490 13:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.490 13:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.490 13:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.490 13:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.490 "name": "Existed_Raid", 00:12:33.490 "uuid": "c42ff9c9-1812-43f0-bfb2-2a7d9c7fbdf1", 00:12:33.490 "strip_size_kb": 64, 00:12:33.490 "state": "online", 00:12:33.490 "raid_level": "raid0", 00:12:33.490 "superblock": false, 00:12:33.491 "num_base_bdevs": 2, 00:12:33.491 "num_base_bdevs_discovered": 2, 00:12:33.491 "num_base_bdevs_operational": 2, 00:12:33.491 "base_bdevs_list": [ 00:12:33.491 { 00:12:33.491 "name": "BaseBdev1", 00:12:33.491 "uuid": "accc471b-b200-4aba-9180-717136463e20", 00:12:33.491 "is_configured": true, 00:12:33.491 "data_offset": 0, 00:12:33.491 "data_size": 65536 00:12:33.491 }, 00:12:33.491 { 00:12:33.491 "name": "BaseBdev2", 00:12:33.491 "uuid": "d62eca44-acde-4991-8f7d-e766733e7152", 00:12:33.491 "is_configured": true, 00:12:33.491 "data_offset": 0, 00:12:33.491 "data_size": 65536 00:12:33.491 } 00:12:33.491 ] 00:12:33.491 }' 00:12:33.491 13:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
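The sizes in these dumps are mutually consistent: `bdev_malloc_create 32 512` makes each base bdev 32 MiB with 512-byte blocks (65536 blocks, matching the `num_blocks` and `data_size` fields), and a two-disk raid0 exposes the sum of the base bdev blocks, matching the `blockcnt 131072, blocklen 512` logged by `raid_bdev_configure_cont`. The arithmetic, as a quick sketch:

```shell
# Size math behind the dumps: two 32 MiB malloc bdevs with 512-byte
# blocks, striped as raid0, expose the sum of the base bdev blocks.
base_size_mb=32
block_size=512
num_base_bdevs=2

base_blocks=$(( base_size_mb * 1024 * 1024 / block_size ))   # per base bdev
raid_blocks=$(( base_blocks * num_base_bdevs ))              # raid0 total

echo "base_blocks=$base_blocks raid_blocks=$raid_blocks"
```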
00:12:33.491 13:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.057 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:34.057 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:34.057 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:34.057 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:34.057 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:34.057 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:34.057 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:34.057 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:34.057 13:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.057 13:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.057 [2024-11-20 13:32:33.290459] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:34.057 13:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.057 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:34.057 "name": "Existed_Raid", 00:12:34.057 "aliases": [ 00:12:34.057 "c42ff9c9-1812-43f0-bfb2-2a7d9c7fbdf1" 00:12:34.057 ], 00:12:34.057 "product_name": "Raid Volume", 00:12:34.057 "block_size": 512, 00:12:34.057 "num_blocks": 131072, 00:12:34.057 "uuid": "c42ff9c9-1812-43f0-bfb2-2a7d9c7fbdf1", 00:12:34.057 "assigned_rate_limits": { 00:12:34.057 "rw_ios_per_sec": 0, 00:12:34.057 "rw_mbytes_per_sec": 0, 00:12:34.057 "r_mbytes_per_sec": 
0, 00:12:34.057 "w_mbytes_per_sec": 0 00:12:34.057 }, 00:12:34.057 "claimed": false, 00:12:34.057 "zoned": false, 00:12:34.057 "supported_io_types": { 00:12:34.057 "read": true, 00:12:34.057 "write": true, 00:12:34.057 "unmap": true, 00:12:34.057 "flush": true, 00:12:34.057 "reset": true, 00:12:34.057 "nvme_admin": false, 00:12:34.057 "nvme_io": false, 00:12:34.057 "nvme_io_md": false, 00:12:34.058 "write_zeroes": true, 00:12:34.058 "zcopy": false, 00:12:34.058 "get_zone_info": false, 00:12:34.058 "zone_management": false, 00:12:34.058 "zone_append": false, 00:12:34.058 "compare": false, 00:12:34.058 "compare_and_write": false, 00:12:34.058 "abort": false, 00:12:34.058 "seek_hole": false, 00:12:34.058 "seek_data": false, 00:12:34.058 "copy": false, 00:12:34.058 "nvme_iov_md": false 00:12:34.058 }, 00:12:34.058 "memory_domains": [ 00:12:34.058 { 00:12:34.058 "dma_device_id": "system", 00:12:34.058 "dma_device_type": 1 00:12:34.058 }, 00:12:34.058 { 00:12:34.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.058 "dma_device_type": 2 00:12:34.058 }, 00:12:34.058 { 00:12:34.058 "dma_device_id": "system", 00:12:34.058 "dma_device_type": 1 00:12:34.058 }, 00:12:34.058 { 00:12:34.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.058 "dma_device_type": 2 00:12:34.058 } 00:12:34.058 ], 00:12:34.058 "driver_specific": { 00:12:34.058 "raid": { 00:12:34.058 "uuid": "c42ff9c9-1812-43f0-bfb2-2a7d9c7fbdf1", 00:12:34.058 "strip_size_kb": 64, 00:12:34.058 "state": "online", 00:12:34.058 "raid_level": "raid0", 00:12:34.058 "superblock": false, 00:12:34.058 "num_base_bdevs": 2, 00:12:34.058 "num_base_bdevs_discovered": 2, 00:12:34.058 "num_base_bdevs_operational": 2, 00:12:34.058 "base_bdevs_list": [ 00:12:34.058 { 00:12:34.058 "name": "BaseBdev1", 00:12:34.058 "uuid": "accc471b-b200-4aba-9180-717136463e20", 00:12:34.058 "is_configured": true, 00:12:34.058 "data_offset": 0, 00:12:34.058 "data_size": 65536 00:12:34.058 }, 00:12:34.058 { 00:12:34.058 "name": "BaseBdev2", 
00:12:34.058 "uuid": "d62eca44-acde-4991-8f7d-e766733e7152", 00:12:34.058 "is_configured": true, 00:12:34.058 "data_offset": 0, 00:12:34.058 "data_size": 65536 00:12:34.058 } 00:12:34.058 ] 00:12:34.058 } 00:12:34.058 } 00:12:34.058 }' 00:12:34.058 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:34.058 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:34.058 BaseBdev2' 00:12:34.058 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.058 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:34.058 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:34.058 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:34.058 13:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.058 13:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.058 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.058 13:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.058 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:34.058 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:34.058 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:34.058 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:12:34.058 13:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.058 13:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.058 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.058 13:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.058 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:34.058 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:34.058 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:34.058 13:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.058 13:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.058 [2024-11-20 13:32:33.513942] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:34.058 [2024-11-20 13:32:33.513983] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:34.058 [2024-11-20 13:32:33.514034] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:34.317 13:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.317 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:34.317 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:34.317 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:34.317 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:34.317 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:12:34.317 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:12:34.317 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:34.317 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:34.317 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:34.317 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:34.317 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:34.317 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.317 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.317 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.317 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.317 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.317 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.317 13:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.317 13:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.317 13:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.317 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.317 "name": "Existed_Raid", 00:12:34.317 "uuid": "c42ff9c9-1812-43f0-bfb2-2a7d9c7fbdf1", 00:12:34.317 "strip_size_kb": 64, 00:12:34.317 
"state": "offline", 00:12:34.317 "raid_level": "raid0", 00:12:34.317 "superblock": false, 00:12:34.317 "num_base_bdevs": 2, 00:12:34.317 "num_base_bdevs_discovered": 1, 00:12:34.317 "num_base_bdevs_operational": 1, 00:12:34.317 "base_bdevs_list": [ 00:12:34.317 { 00:12:34.317 "name": null, 00:12:34.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.317 "is_configured": false, 00:12:34.317 "data_offset": 0, 00:12:34.317 "data_size": 65536 00:12:34.317 }, 00:12:34.317 { 00:12:34.317 "name": "BaseBdev2", 00:12:34.317 "uuid": "d62eca44-acde-4991-8f7d-e766733e7152", 00:12:34.317 "is_configured": true, 00:12:34.317 "data_offset": 0, 00:12:34.317 "data_size": 65536 00:12:34.317 } 00:12:34.317 ] 00:12:34.317 }' 00:12:34.317 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.317 13:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.576 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:34.576 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:34.576 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.576 13:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.576 13:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.576 13:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:34.576 13:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.576 13:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:34.576 13:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:34.576 13:32:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:34.576 13:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.576 13:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.576 [2024-11-20 13:32:34.037207] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:34.576 [2024-11-20 13:32:34.037264] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:34.834 13:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.834 13:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:34.834 13:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:34.834 13:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.834 13:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:34.834 13:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.834 13:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.834 13:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.834 13:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:34.834 13:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:34.834 13:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:12:34.834 13:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60491 00:12:34.834 13:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60491 ']' 00:12:34.834 13:32:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 60491 00:12:34.834 13:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:34.834 13:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:34.834 13:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60491 00:12:34.834 13:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:34.834 13:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:34.834 killing process with pid 60491 00:12:34.834 13:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60491' 00:12:34.834 13:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60491 00:12:34.834 [2024-11-20 13:32:34.224778] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:34.834 13:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60491 00:12:34.834 [2024-11-20 13:32:34.241235] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:36.212 13:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:36.212 00:12:36.212 real 0m4.932s 00:12:36.212 user 0m7.052s 00:12:36.212 sys 0m0.890s 00:12:36.212 13:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:36.212 ************************************ 00:12:36.212 END TEST raid_state_function_test 00:12:36.212 ************************************ 00:12:36.212 13:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.212 13:32:35 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:12:36.212 13:32:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:12:36.212 13:32:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:36.212 13:32:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:36.212 ************************************ 00:12:36.212 START TEST raid_state_function_test_sb 00:12:36.212 ************************************ 00:12:36.212 13:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:12:36.212 13:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:36.212 13:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:12:36.212 13:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:36.212 13:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:36.212 13:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:36.212 13:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:36.212 13:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:36.212 13:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:36.212 13:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:36.212 13:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:36.212 13:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:36.212 13:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:36.212 13:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:36.212 13:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:12:36.212 13:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:36.212 13:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:36.212 13:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:36.212 13:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:36.212 13:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:36.212 13:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:36.212 13:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:36.212 13:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:36.212 13:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:36.212 13:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60738 00:12:36.212 Process raid pid: 60738 00:12:36.212 13:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60738' 00:12:36.212 13:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:36.212 13:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60738 00:12:36.212 13:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 60738 ']' 00:12:36.212 13:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.212 13:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:36.212 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:12:36.212 13:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.212 13:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:36.212 13:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.212 [2024-11-20 13:32:35.555015] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:12:36.212 [2024-11-20 13:32:35.555157] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:36.471 [2024-11-20 13:32:35.720930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.471 [2024-11-20 13:32:35.847687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.730 [2024-11-20 13:32:36.063078] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:36.730 [2024-11-20 13:32:36.063126] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:37.033 13:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:37.033 13:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:37.033 13:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:37.033 13:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.033 13:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.033 [2024-11-20 13:32:36.426912] bdev.c:8626:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:12:37.033 [2024-11-20 13:32:36.426972] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:37.033 [2024-11-20 13:32:36.426985] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:37.033 [2024-11-20 13:32:36.426998] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:37.033 13:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.033 13:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:37.033 13:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:37.033 13:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:37.033 13:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:37.033 13:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:37.033 13:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:37.033 13:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.033 13:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.033 13:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.033 13:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.033 13:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.033 13:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.033 
13:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.033 13:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.033 13:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.033 13:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.033 "name": "Existed_Raid", 00:12:37.033 "uuid": "3499b5f3-72a4-41ee-acaf-7277d57d2443", 00:12:37.033 "strip_size_kb": 64, 00:12:37.033 "state": "configuring", 00:12:37.033 "raid_level": "raid0", 00:12:37.033 "superblock": true, 00:12:37.033 "num_base_bdevs": 2, 00:12:37.033 "num_base_bdevs_discovered": 0, 00:12:37.033 "num_base_bdevs_operational": 2, 00:12:37.033 "base_bdevs_list": [ 00:12:37.033 { 00:12:37.033 "name": "BaseBdev1", 00:12:37.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.033 "is_configured": false, 00:12:37.033 "data_offset": 0, 00:12:37.033 "data_size": 0 00:12:37.033 }, 00:12:37.033 { 00:12:37.033 "name": "BaseBdev2", 00:12:37.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.033 "is_configured": false, 00:12:37.033 "data_offset": 0, 00:12:37.033 "data_size": 0 00:12:37.033 } 00:12:37.033 ] 00:12:37.033 }' 00:12:37.033 13:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.033 13:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.604 13:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:37.604 13:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.604 13:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.604 [2024-11-20 13:32:36.826444] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:12:37.604 [2024-11-20 13:32:36.826485] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:37.604 13:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.604 13:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:37.604 13:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.604 13:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.604 [2024-11-20 13:32:36.834433] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:37.605 [2024-11-20 13:32:36.834480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:37.605 [2024-11-20 13:32:36.834491] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:37.605 [2024-11-20 13:32:36.834506] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:37.605 13:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.605 13:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:37.605 13:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.605 13:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.605 [2024-11-20 13:32:36.879225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:37.605 BaseBdev1 00:12:37.605 13:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.605 13:32:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:37.605 13:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:37.605 13:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:37.605 13:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:37.605 13:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:37.605 13:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:37.605 13:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:37.605 13:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.605 13:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.605 13:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.605 13:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:37.605 13:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.605 13:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.605 [ 00:12:37.605 { 00:12:37.605 "name": "BaseBdev1", 00:12:37.605 "aliases": [ 00:12:37.605 "fc229c7d-fd4d-402b-9586-d274847b39d8" 00:12:37.605 ], 00:12:37.605 "product_name": "Malloc disk", 00:12:37.605 "block_size": 512, 00:12:37.605 "num_blocks": 65536, 00:12:37.605 "uuid": "fc229c7d-fd4d-402b-9586-d274847b39d8", 00:12:37.605 "assigned_rate_limits": { 00:12:37.605 "rw_ios_per_sec": 0, 00:12:37.605 "rw_mbytes_per_sec": 0, 00:12:37.605 "r_mbytes_per_sec": 0, 00:12:37.605 "w_mbytes_per_sec": 0 00:12:37.605 }, 00:12:37.605 "claimed": true, 
00:12:37.605 "claim_type": "exclusive_write", 00:12:37.605 "zoned": false, 00:12:37.605 "supported_io_types": { 00:12:37.605 "read": true, 00:12:37.605 "write": true, 00:12:37.605 "unmap": true, 00:12:37.605 "flush": true, 00:12:37.605 "reset": true, 00:12:37.605 "nvme_admin": false, 00:12:37.605 "nvme_io": false, 00:12:37.605 "nvme_io_md": false, 00:12:37.605 "write_zeroes": true, 00:12:37.605 "zcopy": true, 00:12:37.605 "get_zone_info": false, 00:12:37.605 "zone_management": false, 00:12:37.605 "zone_append": false, 00:12:37.605 "compare": false, 00:12:37.605 "compare_and_write": false, 00:12:37.605 "abort": true, 00:12:37.605 "seek_hole": false, 00:12:37.605 "seek_data": false, 00:12:37.605 "copy": true, 00:12:37.605 "nvme_iov_md": false 00:12:37.605 }, 00:12:37.605 "memory_domains": [ 00:12:37.605 { 00:12:37.605 "dma_device_id": "system", 00:12:37.605 "dma_device_type": 1 00:12:37.605 }, 00:12:37.605 { 00:12:37.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.605 "dma_device_type": 2 00:12:37.605 } 00:12:37.605 ], 00:12:37.605 "driver_specific": {} 00:12:37.605 } 00:12:37.605 ] 00:12:37.605 13:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.605 13:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:37.605 13:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:37.605 13:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:37.605 13:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:37.605 13:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:37.605 13:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:37.605 13:32:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:37.605 13:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.605 13:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.605 13:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.605 13:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.605 13:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.605 13:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.605 13:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.605 13:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.605 13:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.605 13:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.605 "name": "Existed_Raid", 00:12:37.605 "uuid": "7c104fc9-8e49-4aa5-bf7a-c567f10fdd87", 00:12:37.605 "strip_size_kb": 64, 00:12:37.605 "state": "configuring", 00:12:37.605 "raid_level": "raid0", 00:12:37.605 "superblock": true, 00:12:37.605 "num_base_bdevs": 2, 00:12:37.605 "num_base_bdevs_discovered": 1, 00:12:37.605 "num_base_bdevs_operational": 2, 00:12:37.605 "base_bdevs_list": [ 00:12:37.605 { 00:12:37.605 "name": "BaseBdev1", 00:12:37.605 "uuid": "fc229c7d-fd4d-402b-9586-d274847b39d8", 00:12:37.605 "is_configured": true, 00:12:37.605 "data_offset": 2048, 00:12:37.605 "data_size": 63488 00:12:37.605 }, 00:12:37.605 { 00:12:37.605 "name": "BaseBdev2", 00:12:37.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.605 
"is_configured": false, 00:12:37.605 "data_offset": 0, 00:12:37.605 "data_size": 0 00:12:37.605 } 00:12:37.605 ] 00:12:37.605 }' 00:12:37.605 13:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.605 13:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.174 13:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:38.174 13:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.174 13:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.174 [2024-11-20 13:32:37.366636] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:38.174 [2024-11-20 13:32:37.366695] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:38.174 13:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.174 13:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:38.174 13:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.174 13:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.174 [2024-11-20 13:32:37.374686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:38.174 [2024-11-20 13:32:37.376816] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:38.174 [2024-11-20 13:32:37.376865] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:38.174 13:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.174 13:32:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:38.174 13:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:38.174 13:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:38.174 13:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:38.174 13:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:38.174 13:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:38.174 13:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:38.174 13:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:38.174 13:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.174 13:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.174 13:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.174 13:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.174 13:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.174 13:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.174 13:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.174 13:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.174 13:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.174 13:32:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.174 "name": "Existed_Raid", 00:12:38.175 "uuid": "25fad497-5525-47b8-a7b4-0cdad6dacb40", 00:12:38.175 "strip_size_kb": 64, 00:12:38.175 "state": "configuring", 00:12:38.175 "raid_level": "raid0", 00:12:38.175 "superblock": true, 00:12:38.175 "num_base_bdevs": 2, 00:12:38.175 "num_base_bdevs_discovered": 1, 00:12:38.175 "num_base_bdevs_operational": 2, 00:12:38.175 "base_bdevs_list": [ 00:12:38.175 { 00:12:38.175 "name": "BaseBdev1", 00:12:38.175 "uuid": "fc229c7d-fd4d-402b-9586-d274847b39d8", 00:12:38.175 "is_configured": true, 00:12:38.175 "data_offset": 2048, 00:12:38.175 "data_size": 63488 00:12:38.175 }, 00:12:38.175 { 00:12:38.175 "name": "BaseBdev2", 00:12:38.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.175 "is_configured": false, 00:12:38.175 "data_offset": 0, 00:12:38.175 "data_size": 0 00:12:38.175 } 00:12:38.175 ] 00:12:38.175 }' 00:12:38.175 13:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.175 13:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.435 13:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:38.435 13:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.435 13:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.435 [2024-11-20 13:32:37.834378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:38.435 [2024-11-20 13:32:37.834648] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:38.435 [2024-11-20 13:32:37.834663] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:38.435 [2024-11-20 13:32:37.835032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:12:38.435 BaseBdev2 00:12:38.435 [2024-11-20 13:32:37.835213] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:38.435 [2024-11-20 13:32:37.835230] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:38.435 [2024-11-20 13:32:37.835367] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.435 13:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.435 13:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:38.435 13:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:38.435 13:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:38.435 13:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:38.435 13:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:38.435 13:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:38.435 13:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:38.435 13:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.435 13:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.435 13:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.435 13:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:38.435 13:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.435 13:32:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.435 [ 00:12:38.435 { 00:12:38.435 "name": "BaseBdev2", 00:12:38.435 "aliases": [ 00:12:38.435 "9504fc5c-e845-42d4-b381-22542f82df65" 00:12:38.435 ], 00:12:38.435 "product_name": "Malloc disk", 00:12:38.435 "block_size": 512, 00:12:38.435 "num_blocks": 65536, 00:12:38.435 "uuid": "9504fc5c-e845-42d4-b381-22542f82df65", 00:12:38.435 "assigned_rate_limits": { 00:12:38.435 "rw_ios_per_sec": 0, 00:12:38.435 "rw_mbytes_per_sec": 0, 00:12:38.435 "r_mbytes_per_sec": 0, 00:12:38.435 "w_mbytes_per_sec": 0 00:12:38.435 }, 00:12:38.435 "claimed": true, 00:12:38.435 "claim_type": "exclusive_write", 00:12:38.435 "zoned": false, 00:12:38.435 "supported_io_types": { 00:12:38.435 "read": true, 00:12:38.435 "write": true, 00:12:38.435 "unmap": true, 00:12:38.435 "flush": true, 00:12:38.435 "reset": true, 00:12:38.435 "nvme_admin": false, 00:12:38.435 "nvme_io": false, 00:12:38.435 "nvme_io_md": false, 00:12:38.435 "write_zeroes": true, 00:12:38.435 "zcopy": true, 00:12:38.435 "get_zone_info": false, 00:12:38.435 "zone_management": false, 00:12:38.435 "zone_append": false, 00:12:38.435 "compare": false, 00:12:38.435 "compare_and_write": false, 00:12:38.435 "abort": true, 00:12:38.435 "seek_hole": false, 00:12:38.435 "seek_data": false, 00:12:38.435 "copy": true, 00:12:38.435 "nvme_iov_md": false 00:12:38.435 }, 00:12:38.435 "memory_domains": [ 00:12:38.435 { 00:12:38.435 "dma_device_id": "system", 00:12:38.435 "dma_device_type": 1 00:12:38.435 }, 00:12:38.435 { 00:12:38.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.435 "dma_device_type": 2 00:12:38.435 } 00:12:38.435 ], 00:12:38.435 "driver_specific": {} 00:12:38.435 } 00:12:38.435 ] 00:12:38.435 13:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.435 13:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:38.435 13:32:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:38.435 13:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:38.435 13:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:12:38.435 13:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:38.435 13:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.435 13:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:38.435 13:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:38.435 13:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:38.435 13:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.435 13:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.435 13:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.435 13:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.435 13:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.435 13:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.435 13:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.435 13:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.435 13:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.435 13:32:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.435 "name": "Existed_Raid", 00:12:38.435 "uuid": "25fad497-5525-47b8-a7b4-0cdad6dacb40", 00:12:38.435 "strip_size_kb": 64, 00:12:38.435 "state": "online", 00:12:38.435 "raid_level": "raid0", 00:12:38.435 "superblock": true, 00:12:38.435 "num_base_bdevs": 2, 00:12:38.435 "num_base_bdevs_discovered": 2, 00:12:38.435 "num_base_bdevs_operational": 2, 00:12:38.435 "base_bdevs_list": [ 00:12:38.435 { 00:12:38.435 "name": "BaseBdev1", 00:12:38.435 "uuid": "fc229c7d-fd4d-402b-9586-d274847b39d8", 00:12:38.435 "is_configured": true, 00:12:38.435 "data_offset": 2048, 00:12:38.435 "data_size": 63488 00:12:38.435 }, 00:12:38.435 { 00:12:38.435 "name": "BaseBdev2", 00:12:38.435 "uuid": "9504fc5c-e845-42d4-b381-22542f82df65", 00:12:38.435 "is_configured": true, 00:12:38.435 "data_offset": 2048, 00:12:38.435 "data_size": 63488 00:12:38.435 } 00:12:38.435 ] 00:12:38.435 }' 00:12:38.435 13:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.435 13:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.004 13:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:39.004 13:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:39.004 13:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:39.004 13:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:39.004 13:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:39.004 13:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:39.004 13:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:12:39.004 13:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:39.004 13:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.004 13:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.004 [2024-11-20 13:32:38.282659] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:39.004 13:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.004 13:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:39.004 "name": "Existed_Raid", 00:12:39.004 "aliases": [ 00:12:39.004 "25fad497-5525-47b8-a7b4-0cdad6dacb40" 00:12:39.004 ], 00:12:39.004 "product_name": "Raid Volume", 00:12:39.004 "block_size": 512, 00:12:39.004 "num_blocks": 126976, 00:12:39.004 "uuid": "25fad497-5525-47b8-a7b4-0cdad6dacb40", 00:12:39.004 "assigned_rate_limits": { 00:12:39.004 "rw_ios_per_sec": 0, 00:12:39.004 "rw_mbytes_per_sec": 0, 00:12:39.004 "r_mbytes_per_sec": 0, 00:12:39.004 "w_mbytes_per_sec": 0 00:12:39.004 }, 00:12:39.004 "claimed": false, 00:12:39.004 "zoned": false, 00:12:39.004 "supported_io_types": { 00:12:39.004 "read": true, 00:12:39.004 "write": true, 00:12:39.004 "unmap": true, 00:12:39.004 "flush": true, 00:12:39.004 "reset": true, 00:12:39.004 "nvme_admin": false, 00:12:39.004 "nvme_io": false, 00:12:39.004 "nvme_io_md": false, 00:12:39.004 "write_zeroes": true, 00:12:39.004 "zcopy": false, 00:12:39.004 "get_zone_info": false, 00:12:39.004 "zone_management": false, 00:12:39.005 "zone_append": false, 00:12:39.005 "compare": false, 00:12:39.005 "compare_and_write": false, 00:12:39.005 "abort": false, 00:12:39.005 "seek_hole": false, 00:12:39.005 "seek_data": false, 00:12:39.005 "copy": false, 00:12:39.005 "nvme_iov_md": false 00:12:39.005 }, 00:12:39.005 "memory_domains": [ 00:12:39.005 { 00:12:39.005 
"dma_device_id": "system", 00:12:39.005 "dma_device_type": 1 00:12:39.005 }, 00:12:39.005 { 00:12:39.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.005 "dma_device_type": 2 00:12:39.005 }, 00:12:39.005 { 00:12:39.005 "dma_device_id": "system", 00:12:39.005 "dma_device_type": 1 00:12:39.005 }, 00:12:39.005 { 00:12:39.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.005 "dma_device_type": 2 00:12:39.005 } 00:12:39.005 ], 00:12:39.005 "driver_specific": { 00:12:39.005 "raid": { 00:12:39.005 "uuid": "25fad497-5525-47b8-a7b4-0cdad6dacb40", 00:12:39.005 "strip_size_kb": 64, 00:12:39.005 "state": "online", 00:12:39.005 "raid_level": "raid0", 00:12:39.005 "superblock": true, 00:12:39.005 "num_base_bdevs": 2, 00:12:39.005 "num_base_bdevs_discovered": 2, 00:12:39.005 "num_base_bdevs_operational": 2, 00:12:39.005 "base_bdevs_list": [ 00:12:39.005 { 00:12:39.005 "name": "BaseBdev1", 00:12:39.005 "uuid": "fc229c7d-fd4d-402b-9586-d274847b39d8", 00:12:39.005 "is_configured": true, 00:12:39.005 "data_offset": 2048, 00:12:39.005 "data_size": 63488 00:12:39.005 }, 00:12:39.005 { 00:12:39.005 "name": "BaseBdev2", 00:12:39.005 "uuid": "9504fc5c-e845-42d4-b381-22542f82df65", 00:12:39.005 "is_configured": true, 00:12:39.005 "data_offset": 2048, 00:12:39.005 "data_size": 63488 00:12:39.005 } 00:12:39.005 ] 00:12:39.005 } 00:12:39.005 } 00:12:39.005 }' 00:12:39.005 13:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:39.005 13:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:39.005 BaseBdev2' 00:12:39.005 13:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:39.005 13:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:39.005 13:32:38 
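The `verify_raid_bdev_properties` steps traced above pull each configured base bdev name out of the raid volume's `driver_specific.raid.base_bdevs_list` with jq, then compare the `[.block_size, .md_size, .md_interleave, .dif_type]` tuple of every base bdev against the raid bdev's own tuple. A minimal Python sketch of that same selection and comparison logic, using trimmed hypothetical records rather than real `bdev_get_bdevs` output, might look like:

```python
# Hypothetical, trimmed bdev records mirroring the JSON dumps in the log.
raid_bdev = {
    "driver_specific": {"raid": {"base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": True},
        {"name": "BaseBdev2", "is_configured": True},
    ]}}
}
base_bdevs = [
    {"name": "BaseBdev1", "block_size": 512, "md_size": None,
     "md_interleave": None, "dif_type": None},
    {"name": "BaseBdev2", "block_size": 512, "md_size": None,
     "md_interleave": None, "dif_type": None},
]

# Equivalent of:
#   jq -r '.driver_specific.raid.base_bdevs_list[]
#          | select(.is_configured == true).name'
names = [b["name"]
         for b in raid_bdev["driver_specific"]["raid"]["base_bdevs_list"]
         if b["is_configured"]]

def fmt(bdev):
    # Equivalent of: jq -r '[.block_size, .md_size, .md_interleave,
    #                        .dif_type] | join(" ")' -- jq renders null as "".
    keys = ("block_size", "md_size", "md_interleave", "dif_type")
    return " ".join("" if bdev[k] is None else str(bdev[k]) for k in keys)

# cmp_raid_bdev='512   ' in the trace: block size 512, three empty
# metadata fields, matched by the bash test [[ 512 == \5\1\2\ \ \  ]].
cmp_raid_bdev = "512   "
for b in base_bdevs:
    assert fmt(b) == cmp_raid_bdev
print(names)  # ['BaseBdev1', 'BaseBdev2']
```

This is a sketch of the comparison the test performs, not the test itself; the real script does the work with `rpc_cmd bdev_get_bdevs -b <name>` and jq inside `bdev_raid.sh`.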
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:39.005 13:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:39.005 13:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:39.005 13:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.005 13:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.005 13:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.005 13:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:39.005 13:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:39.005 13:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:39.005 13:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:39.005 13:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:39.005 13:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.005 13:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.005 13:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.005 13:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:39.005 13:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:39.005 13:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:12:39.005 13:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.005 13:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.005 [2024-11-20 13:32:38.486445] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:39.005 [2024-11-20 13:32:38.486486] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:39.005 [2024-11-20 13:32:38.486541] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:39.264 13:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.264 13:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:39.264 13:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:39.264 13:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:39.264 13:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:39.264 13:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:39.264 13:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:12:39.264 13:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:39.265 13:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:39.265 13:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:39.265 13:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:39.265 13:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:12:39.265 13:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.265 13:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.265 13:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.265 13:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.265 13:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.265 13:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.265 13:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.265 13:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:39.265 13:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.265 13:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.265 "name": "Existed_Raid", 00:12:39.265 "uuid": "25fad497-5525-47b8-a7b4-0cdad6dacb40", 00:12:39.265 "strip_size_kb": 64, 00:12:39.265 "state": "offline", 00:12:39.265 "raid_level": "raid0", 00:12:39.265 "superblock": true, 00:12:39.265 "num_base_bdevs": 2, 00:12:39.265 "num_base_bdevs_discovered": 1, 00:12:39.265 "num_base_bdevs_operational": 1, 00:12:39.265 "base_bdevs_list": [ 00:12:39.265 { 00:12:39.265 "name": null, 00:12:39.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.265 "is_configured": false, 00:12:39.265 "data_offset": 0, 00:12:39.265 "data_size": 63488 00:12:39.265 }, 00:12:39.265 { 00:12:39.265 "name": "BaseBdev2", 00:12:39.265 "uuid": "9504fc5c-e845-42d4-b381-22542f82df65", 00:12:39.265 "is_configured": true, 00:12:39.265 "data_offset": 2048, 00:12:39.265 "data_size": 63488 00:12:39.265 } 00:12:39.265 ] 
00:12:39.265 }' 00:12:39.265 13:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.265 13:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.523 13:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:39.523 13:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:39.781 13:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:39.781 13:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.781 13:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.781 13:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.781 13:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.781 13:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:39.781 13:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:39.781 13:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:39.781 13:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.781 13:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.781 [2024-11-20 13:32:39.038899] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:39.781 [2024-11-20 13:32:39.038961] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:39.781 13:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.781 13:32:39 
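After `bdev_malloc_delete BaseBdev1`, the trace calls `verify_raid_bdev_state Existed_Raid offline raid0 64 1`, which fetches the raid bdev via `bdev_raid_get_bdevs` and asserts the expected state, level, strip size, and operational base-bdev count. A small Python sketch of those assertions, run against a trimmed hypothetical record shaped like the `raid_bdev_info` dump above (not real RPC output), could be:

```python
import json

# Hypothetical bdev_raid_get_bdevs-style record, trimmed to the fields
# that verify_raid_bdev_state actually checks.
raid_bdev_info = json.loads("""{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "offline",
  "raid_level": "raid0",
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 1
}""")

def verify_raid_bdev_state(info, expected_state, raid_level,
                           strip_size, operational):
    # The same field-by-field checks the shell helper performs with jq.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational

verify_raid_bdev_state(raid_bdev_info, "offline", "raid0", 64, 1)
print("state verified:", raid_bdev_info["state"])
```

The expected state is `offline` here because raid0 has no redundancy (`has_redundancy raid0` returns 1 in the trace), so removing one base bdev takes the whole array down rather than degrading it.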
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:39.781 13:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:39.781 13:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.781 13:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.781 13:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:39.781 13:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.781 13:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.781 13:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:39.781 13:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:39.781 13:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:12:39.781 13:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60738 00:12:39.781 13:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 60738 ']' 00:12:39.781 13:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 60738 00:12:39.781 13:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:39.781 13:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:39.781 13:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60738 00:12:39.781 killing process with pid 60738 00:12:39.781 13:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:39.782 13:32:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:39.782 13:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60738' 00:12:39.782 13:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 60738 00:12:39.782 [2024-11-20 13:32:39.212989] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:39.782 13:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 60738 00:12:39.782 [2024-11-20 13:32:39.230037] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:41.159 13:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:41.159 00:12:41.159 real 0m4.914s 00:12:41.159 user 0m7.014s 00:12:41.159 sys 0m0.908s 00:12:41.159 13:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:41.159 13:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.159 ************************************ 00:12:41.159 END TEST raid_state_function_test_sb 00:12:41.159 ************************************ 00:12:41.159 13:32:40 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:12:41.159 13:32:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:41.159 13:32:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:41.159 13:32:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:41.159 ************************************ 00:12:41.159 START TEST raid_superblock_test 00:12:41.159 ************************************ 00:12:41.159 13:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:12:41.159 13:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:12:41.159 13:32:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:12:41.159 13:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:41.159 13:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:41.159 13:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:41.159 13:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:41.159 13:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:41.159 13:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:41.159 13:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:41.159 13:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:41.159 13:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:41.159 13:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:41.159 13:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:41.159 13:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:12:41.159 13:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:41.159 13:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:41.159 13:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=60990 00:12:41.160 13:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:41.160 13:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 60990 00:12:41.160 13:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60990 ']' 00:12:41.160 
13:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.160 13:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:41.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.160 13:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.160 13:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:41.160 13:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.160 [2024-11-20 13:32:40.541165] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:12:41.160 [2024-11-20 13:32:40.541300] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60990 ] 00:12:41.418 [2024-11-20 13:32:40.722118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.418 [2024-11-20 13:32:40.839626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.705 [2024-11-20 13:32:41.044973] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:41.705 [2024-11-20 13:32:41.045017] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:41.963 13:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:41.963 13:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:41.963 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:41.963 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:12:41.963 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:41.963 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:41.963 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:41.963 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:41.963 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:41.963 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:41.963 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:41.963 13:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.963 13:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.963 malloc1 00:12:41.963 13:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.963 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:41.963 13:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.963 13:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.963 [2024-11-20 13:32:41.437697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:41.963 [2024-11-20 13:32:41.437886] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.963 [2024-11-20 13:32:41.437945] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:41.963 [2024-11-20 13:32:41.438030] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:12:41.963 [2024-11-20 13:32:41.440444] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.963 [2024-11-20 13:32:41.440600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:41.963 pt1 00:12:41.963 13:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.963 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:41.963 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:41.963 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:41.963 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:41.963 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:41.963 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:41.963 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:41.963 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:41.963 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:41.963 13:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.963 13:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.222 malloc2 00:12:42.222 13:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.222 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:42.222 13:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:42.222 13:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.222 [2024-11-20 13:32:41.492373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:42.222 [2024-11-20 13:32:41.492561] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.222 [2024-11-20 13:32:41.492632] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:42.222 [2024-11-20 13:32:41.492774] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.222 [2024-11-20 13:32:41.495199] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.222 [2024-11-20 13:32:41.495339] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:42.222 pt2 00:12:42.222 13:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.222 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:42.222 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:42.222 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:12:42.222 13:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.222 13:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.222 [2024-11-20 13:32:41.504419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:42.222 [2024-11-20 13:32:41.506508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:42.222 [2024-11-20 13:32:41.506800] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:42.222 [2024-11-20 13:32:41.506820] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:12:42.222 [2024-11-20 13:32:41.507121] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:42.222 [2024-11-20 13:32:41.507281] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:42.222 [2024-11-20 13:32:41.507294] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:42.222 [2024-11-20 13:32:41.507463] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:42.222 13:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.222 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:42.223 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:42.223 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:42.223 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:42.223 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:42.223 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:42.223 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.223 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.223 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.223 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.223 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.223 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.223 13:32:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.223 13:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.223 13:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.223 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.223 "name": "raid_bdev1", 00:12:42.223 "uuid": "2db15097-2e4a-4808-aef1-461354844643", 00:12:42.223 "strip_size_kb": 64, 00:12:42.223 "state": "online", 00:12:42.223 "raid_level": "raid0", 00:12:42.223 "superblock": true, 00:12:42.223 "num_base_bdevs": 2, 00:12:42.223 "num_base_bdevs_discovered": 2, 00:12:42.223 "num_base_bdevs_operational": 2, 00:12:42.223 "base_bdevs_list": [ 00:12:42.223 { 00:12:42.223 "name": "pt1", 00:12:42.223 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:42.223 "is_configured": true, 00:12:42.223 "data_offset": 2048, 00:12:42.223 "data_size": 63488 00:12:42.223 }, 00:12:42.223 { 00:12:42.223 "name": "pt2", 00:12:42.223 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:42.223 "is_configured": true, 00:12:42.223 "data_offset": 2048, 00:12:42.223 "data_size": 63488 00:12:42.223 } 00:12:42.223 ] 00:12:42.223 }' 00:12:42.223 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.223 13:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.483 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:42.483 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:42.483 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:42.483 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:42.483 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:42.483 
13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:42.483 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:42.483 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:42.483 13:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.483 13:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.483 [2024-11-20 13:32:41.948371] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:42.744 13:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.744 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:42.744 "name": "raid_bdev1", 00:12:42.744 "aliases": [ 00:12:42.744 "2db15097-2e4a-4808-aef1-461354844643" 00:12:42.744 ], 00:12:42.744 "product_name": "Raid Volume", 00:12:42.744 "block_size": 512, 00:12:42.744 "num_blocks": 126976, 00:12:42.744 "uuid": "2db15097-2e4a-4808-aef1-461354844643", 00:12:42.744 "assigned_rate_limits": { 00:12:42.744 "rw_ios_per_sec": 0, 00:12:42.744 "rw_mbytes_per_sec": 0, 00:12:42.744 "r_mbytes_per_sec": 0, 00:12:42.744 "w_mbytes_per_sec": 0 00:12:42.744 }, 00:12:42.744 "claimed": false, 00:12:42.744 "zoned": false, 00:12:42.744 "supported_io_types": { 00:12:42.744 "read": true, 00:12:42.744 "write": true, 00:12:42.744 "unmap": true, 00:12:42.744 "flush": true, 00:12:42.744 "reset": true, 00:12:42.744 "nvme_admin": false, 00:12:42.744 "nvme_io": false, 00:12:42.744 "nvme_io_md": false, 00:12:42.744 "write_zeroes": true, 00:12:42.744 "zcopy": false, 00:12:42.744 "get_zone_info": false, 00:12:42.744 "zone_management": false, 00:12:42.744 "zone_append": false, 00:12:42.744 "compare": false, 00:12:42.744 "compare_and_write": false, 00:12:42.744 "abort": false, 00:12:42.744 "seek_hole": false, 00:12:42.744 
"seek_data": false, 00:12:42.744 "copy": false, 00:12:42.744 "nvme_iov_md": false 00:12:42.744 }, 00:12:42.744 "memory_domains": [ 00:12:42.744 { 00:12:42.744 "dma_device_id": "system", 00:12:42.744 "dma_device_type": 1 00:12:42.744 }, 00:12:42.744 { 00:12:42.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.744 "dma_device_type": 2 00:12:42.744 }, 00:12:42.744 { 00:12:42.744 "dma_device_id": "system", 00:12:42.744 "dma_device_type": 1 00:12:42.744 }, 00:12:42.744 { 00:12:42.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.744 "dma_device_type": 2 00:12:42.744 } 00:12:42.744 ], 00:12:42.744 "driver_specific": { 00:12:42.744 "raid": { 00:12:42.744 "uuid": "2db15097-2e4a-4808-aef1-461354844643", 00:12:42.744 "strip_size_kb": 64, 00:12:42.744 "state": "online", 00:12:42.744 "raid_level": "raid0", 00:12:42.744 "superblock": true, 00:12:42.744 "num_base_bdevs": 2, 00:12:42.744 "num_base_bdevs_discovered": 2, 00:12:42.744 "num_base_bdevs_operational": 2, 00:12:42.744 "base_bdevs_list": [ 00:12:42.744 { 00:12:42.744 "name": "pt1", 00:12:42.744 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:42.744 "is_configured": true, 00:12:42.744 "data_offset": 2048, 00:12:42.744 "data_size": 63488 00:12:42.744 }, 00:12:42.744 { 00:12:42.744 "name": "pt2", 00:12:42.744 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:42.744 "is_configured": true, 00:12:42.744 "data_offset": 2048, 00:12:42.744 "data_size": 63488 00:12:42.744 } 00:12:42.744 ] 00:12:42.744 } 00:12:42.744 } 00:12:42.744 }' 00:12:42.744 13:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:42.744 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:42.744 pt2' 00:12:42.744 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.744 13:32:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:42.744 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:42.744 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:42.744 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.744 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.744 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.744 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.744 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:42.744 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:42.744 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:42.744 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:42.744 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.744 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.744 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.744 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.744 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:42.744 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:42.744 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:12:42.744 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:42.744 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.744 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.744 [2024-11-20 13:32:42.187983] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:42.744 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.744 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2db15097-2e4a-4808-aef1-461354844643 00:12:42.744 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 2db15097-2e4a-4808-aef1-461354844643 ']' 00:12:42.744 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:42.744 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.744 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.003 [2024-11-20 13:32:42.227649] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:43.003 [2024-11-20 13:32:42.227679] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:43.003 [2024-11-20 13:32:42.227764] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:43.003 [2024-11-20 13:32:42.227814] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:43.003 [2024-11-20 13:32:42.227830] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:43.003 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.003 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq 
-r '.[]' 00:12:43.003 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.003 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.003 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.003 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.004 [2024-11-20 13:32:42.359533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:43.004 [2024-11-20 13:32:42.361808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:43.004 [2024-11-20 13:32:42.361879] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:43.004 [2024-11-20 13:32:42.361938] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:43.004 [2024-11-20 13:32:42.361958] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:43.004 [2024-11-20 13:32:42.361975] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:12:43.004 request: 00:12:43.004 { 00:12:43.004 "name": "raid_bdev1", 00:12:43.004 "raid_level": "raid0", 00:12:43.004 "base_bdevs": [ 00:12:43.004 "malloc1", 00:12:43.004 "malloc2" 00:12:43.004 ], 00:12:43.004 "strip_size_kb": 64, 00:12:43.004 "superblock": false, 00:12:43.004 "method": "bdev_raid_create", 00:12:43.004 "req_id": 1 00:12:43.004 } 00:12:43.004 Got JSON-RPC error response 00:12:43.004 response: 00:12:43.004 { 00:12:43.004 "code": -17, 00:12:43.004 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:43.004 } 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:43.004 
13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.004 [2024-11-20 13:32:42.431414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:43.004 [2024-11-20 13:32:42.431593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.004 [2024-11-20 13:32:42.431646] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:43.004 [2024-11-20 13:32:42.431719] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.004 [2024-11-20 13:32:42.434237] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.004 [2024-11-20 13:32:42.434401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:43.004 [2024-11-20 13:32:42.434621] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:43.004 [2024-11-20 13:32:42.434799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:43.004 pt1 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.004 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.263 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.263 "name": "raid_bdev1", 00:12:43.263 "uuid": "2db15097-2e4a-4808-aef1-461354844643", 00:12:43.263 "strip_size_kb": 64, 00:12:43.263 "state": "configuring", 00:12:43.263 "raid_level": "raid0", 00:12:43.263 "superblock": true, 00:12:43.263 "num_base_bdevs": 2, 00:12:43.263 "num_base_bdevs_discovered": 1, 00:12:43.263 "num_base_bdevs_operational": 2, 00:12:43.263 "base_bdevs_list": [ 00:12:43.263 { 00:12:43.263 "name": "pt1", 00:12:43.263 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:12:43.263 "is_configured": true, 00:12:43.263 "data_offset": 2048, 00:12:43.263 "data_size": 63488 00:12:43.263 }, 00:12:43.263 { 00:12:43.263 "name": null, 00:12:43.263 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:43.263 "is_configured": false, 00:12:43.263 "data_offset": 2048, 00:12:43.263 "data_size": 63488 00:12:43.263 } 00:12:43.263 ] 00:12:43.263 }' 00:12:43.263 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.263 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.521 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:12:43.521 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:43.522 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:43.522 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:43.522 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.522 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.522 [2024-11-20 13:32:42.906770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:43.522 [2024-11-20 13:32:42.906848] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.522 [2024-11-20 13:32:42.906872] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:43.522 [2024-11-20 13:32:42.906887] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.522 [2024-11-20 13:32:42.907382] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.522 [2024-11-20 13:32:42.907413] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:12:43.522 [2024-11-20 13:32:42.907496] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:43.522 [2024-11-20 13:32:42.907526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:43.522 [2024-11-20 13:32:42.907635] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:43.522 [2024-11-20 13:32:42.907649] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:43.522 [2024-11-20 13:32:42.907912] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:43.522 [2024-11-20 13:32:42.908077] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:43.522 [2024-11-20 13:32:42.908088] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:43.522 [2024-11-20 13:32:42.908241] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:43.522 pt2 00:12:43.522 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.522 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:43.522 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:43.522 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:43.522 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:43.522 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.522 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:43.522 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:43.522 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:12:43.522 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.522 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.522 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.522 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.522 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.522 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.522 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.522 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.522 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.522 13:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.522 "name": "raid_bdev1", 00:12:43.522 "uuid": "2db15097-2e4a-4808-aef1-461354844643", 00:12:43.522 "strip_size_kb": 64, 00:12:43.522 "state": "online", 00:12:43.522 "raid_level": "raid0", 00:12:43.522 "superblock": true, 00:12:43.522 "num_base_bdevs": 2, 00:12:43.522 "num_base_bdevs_discovered": 2, 00:12:43.522 "num_base_bdevs_operational": 2, 00:12:43.522 "base_bdevs_list": [ 00:12:43.522 { 00:12:43.522 "name": "pt1", 00:12:43.522 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:43.522 "is_configured": true, 00:12:43.522 "data_offset": 2048, 00:12:43.522 "data_size": 63488 00:12:43.522 }, 00:12:43.522 { 00:12:43.522 "name": "pt2", 00:12:43.522 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:43.522 "is_configured": true, 00:12:43.522 "data_offset": 2048, 00:12:43.522 "data_size": 63488 00:12:43.522 } 00:12:43.522 ] 00:12:43.522 }' 00:12:43.522 13:32:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.522 13:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.091 13:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:44.091 13:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:44.091 13:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:44.091 13:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:44.091 13:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:44.091 13:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:44.091 13:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:44.091 13:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:44.091 13:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.091 13:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.091 [2024-11-20 13:32:43.322675] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:44.091 13:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.091 13:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:44.091 "name": "raid_bdev1", 00:12:44.091 "aliases": [ 00:12:44.091 "2db15097-2e4a-4808-aef1-461354844643" 00:12:44.091 ], 00:12:44.091 "product_name": "Raid Volume", 00:12:44.091 "block_size": 512, 00:12:44.091 "num_blocks": 126976, 00:12:44.091 "uuid": "2db15097-2e4a-4808-aef1-461354844643", 00:12:44.091 "assigned_rate_limits": { 00:12:44.091 "rw_ios_per_sec": 0, 00:12:44.091 "rw_mbytes_per_sec": 0, 00:12:44.091 
"r_mbytes_per_sec": 0, 00:12:44.091 "w_mbytes_per_sec": 0 00:12:44.091 }, 00:12:44.091 "claimed": false, 00:12:44.091 "zoned": false, 00:12:44.091 "supported_io_types": { 00:12:44.091 "read": true, 00:12:44.091 "write": true, 00:12:44.091 "unmap": true, 00:12:44.091 "flush": true, 00:12:44.091 "reset": true, 00:12:44.091 "nvme_admin": false, 00:12:44.091 "nvme_io": false, 00:12:44.091 "nvme_io_md": false, 00:12:44.091 "write_zeroes": true, 00:12:44.091 "zcopy": false, 00:12:44.091 "get_zone_info": false, 00:12:44.091 "zone_management": false, 00:12:44.091 "zone_append": false, 00:12:44.091 "compare": false, 00:12:44.091 "compare_and_write": false, 00:12:44.091 "abort": false, 00:12:44.091 "seek_hole": false, 00:12:44.091 "seek_data": false, 00:12:44.091 "copy": false, 00:12:44.091 "nvme_iov_md": false 00:12:44.091 }, 00:12:44.091 "memory_domains": [ 00:12:44.091 { 00:12:44.091 "dma_device_id": "system", 00:12:44.091 "dma_device_type": 1 00:12:44.091 }, 00:12:44.091 { 00:12:44.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.091 "dma_device_type": 2 00:12:44.091 }, 00:12:44.091 { 00:12:44.091 "dma_device_id": "system", 00:12:44.091 "dma_device_type": 1 00:12:44.091 }, 00:12:44.091 { 00:12:44.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.091 "dma_device_type": 2 00:12:44.091 } 00:12:44.091 ], 00:12:44.091 "driver_specific": { 00:12:44.091 "raid": { 00:12:44.091 "uuid": "2db15097-2e4a-4808-aef1-461354844643", 00:12:44.091 "strip_size_kb": 64, 00:12:44.091 "state": "online", 00:12:44.091 "raid_level": "raid0", 00:12:44.091 "superblock": true, 00:12:44.091 "num_base_bdevs": 2, 00:12:44.091 "num_base_bdevs_discovered": 2, 00:12:44.091 "num_base_bdevs_operational": 2, 00:12:44.091 "base_bdevs_list": [ 00:12:44.091 { 00:12:44.091 "name": "pt1", 00:12:44.091 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:44.091 "is_configured": true, 00:12:44.091 "data_offset": 2048, 00:12:44.091 "data_size": 63488 00:12:44.091 }, 00:12:44.091 { 00:12:44.091 "name": 
"pt2", 00:12:44.091 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:44.091 "is_configured": true, 00:12:44.091 "data_offset": 2048, 00:12:44.091 "data_size": 63488 00:12:44.091 } 00:12:44.091 ] 00:12:44.091 } 00:12:44.091 } 00:12:44.091 }' 00:12:44.091 13:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:44.091 13:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:44.091 pt2' 00:12:44.091 13:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:44.091 13:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:44.091 13:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:44.091 13:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:44.091 13:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.091 13:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:44.092 13:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.092 13:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.092 13:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:44.092 13:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:44.092 13:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:44.092 13:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:44.092 13:32:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:44.092 13:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.092 13:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.092 13:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.092 13:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:44.092 13:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:44.092 13:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:44.092 13:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.092 13:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.092 13:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:44.092 [2024-11-20 13:32:43.558670] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:44.350 13:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.350 13:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2db15097-2e4a-4808-aef1-461354844643 '!=' 2db15097-2e4a-4808-aef1-461354844643 ']' 00:12:44.350 13:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:12:44.350 13:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:44.351 13:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:44.351 13:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 60990 00:12:44.351 13:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60990 ']' 00:12:44.351 13:32:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 60990 00:12:44.351 13:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:44.351 13:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:44.351 13:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60990 00:12:44.351 killing process with pid 60990 00:12:44.351 13:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:44.351 13:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:44.351 13:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60990' 00:12:44.351 13:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 60990 00:12:44.351 [2024-11-20 13:32:43.641421] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:44.351 [2024-11-20 13:32:43.641540] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:44.351 13:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 60990 00:12:44.351 [2024-11-20 13:32:43.641600] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:44.351 [2024-11-20 13:32:43.641618] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:44.609 [2024-11-20 13:32:43.867119] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:45.984 13:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:45.984 00:12:45.984 real 0m4.633s 00:12:45.984 user 0m6.462s 00:12:45.984 sys 0m0.864s 00:12:45.984 13:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:45.984 13:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:12:45.984 ************************************ 00:12:45.984 END TEST raid_superblock_test 00:12:45.984 ************************************ 00:12:45.984 13:32:45 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:12:45.984 13:32:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:45.984 13:32:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:45.984 13:32:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:45.984 ************************************ 00:12:45.984 START TEST raid_read_error_test 00:12:45.984 ************************************ 00:12:45.984 13:32:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:12:45.984 13:32:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:45.984 13:32:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:12:45.984 13:32:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:45.984 13:32:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:45.984 13:32:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:45.984 13:32:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:45.984 13:32:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:45.984 13:32:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:45.984 13:32:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:45.984 13:32:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:45.984 13:32:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:45.984 13:32:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:45.984 13:32:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:45.984 13:32:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:45.984 13:32:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:45.984 13:32:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:45.984 13:32:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:45.984 13:32:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:45.984 13:32:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:45.984 13:32:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:45.984 13:32:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:45.984 13:32:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:45.984 13:32:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.2SvGe2JWFo 00:12:45.984 13:32:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61202 00:12:45.984 13:32:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61202 00:12:45.984 13:32:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:45.984 13:32:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61202 ']' 00:12:45.984 13:32:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.984 13:32:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:45.984 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:12:45.984 13:32:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.984 13:32:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:45.984 13:32:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.984 [2024-11-20 13:32:45.269092] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:12:45.984 [2024-11-20 13:32:45.269459] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61202 ] 00:12:45.984 [2024-11-20 13:32:45.458328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.243 [2024-11-20 13:32:45.596535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.500 [2024-11-20 13:32:45.855772] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:46.500 [2024-11-20 13:32:45.855870] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:46.759 13:32:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:46.759 13:32:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:46.759 13:32:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:46.759 13:32:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:46.759 13:32:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.759 13:32:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.759 BaseBdev1_malloc 
00:12:46.759 13:32:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.759 13:32:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:46.759 13:32:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.759 13:32:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.759 true 00:12:46.759 13:32:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.759 13:32:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:46.759 13:32:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.759 13:32:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.759 [2024-11-20 13:32:46.208852] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:46.759 [2024-11-20 13:32:46.208937] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:46.759 [2024-11-20 13:32:46.208965] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:46.759 [2024-11-20 13:32:46.208981] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:46.759 [2024-11-20 13:32:46.211838] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:46.759 [2024-11-20 13:32:46.211887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:46.759 BaseBdev1 00:12:46.759 13:32:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.759 13:32:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:46.759 13:32:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2_malloc 00:12:46.759 13:32:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.759 13:32:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.021 BaseBdev2_malloc 00:12:47.021 13:32:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.021 13:32:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:47.021 13:32:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.021 13:32:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.021 true 00:12:47.021 13:32:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.021 13:32:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:47.021 13:32:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.021 13:32:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.021 [2024-11-20 13:32:46.286647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:47.021 [2024-11-20 13:32:46.286736] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.021 [2024-11-20 13:32:46.286761] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:47.021 [2024-11-20 13:32:46.286777] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.021 [2024-11-20 13:32:46.289697] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.021 [2024-11-20 13:32:46.289744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:47.021 BaseBdev2 00:12:47.021 13:32:46 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.021 13:32:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:12:47.021 13:32:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.021 13:32:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.021 [2024-11-20 13:32:46.294721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:47.021 [2024-11-20 13:32:46.297793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:47.021 [2024-11-20 13:32:46.298252] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:47.021 [2024-11-20 13:32:46.298298] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:47.021 [2024-11-20 13:32:46.298677] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:47.021 [2024-11-20 13:32:46.298901] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:47.021 [2024-11-20 13:32:46.298920] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:47.021 [2024-11-20 13:32:46.299236] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:47.021 13:32:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.021 13:32:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:47.021 13:32:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:47.021 13:32:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:47.021 13:32:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid0 00:12:47.021 13:32:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:47.021 13:32:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:47.021 13:32:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.021 13:32:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.021 13:32:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.021 13:32:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.021 13:32:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.021 13:32:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.021 13:32:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.021 13:32:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.021 13:32:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.021 13:32:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.021 "name": "raid_bdev1", 00:12:47.021 "uuid": "5c6da89c-7967-4d6f-8bd1-ddb03265b1c3", 00:12:47.021 "strip_size_kb": 64, 00:12:47.021 "state": "online", 00:12:47.021 "raid_level": "raid0", 00:12:47.021 "superblock": true, 00:12:47.021 "num_base_bdevs": 2, 00:12:47.021 "num_base_bdevs_discovered": 2, 00:12:47.021 "num_base_bdevs_operational": 2, 00:12:47.021 "base_bdevs_list": [ 00:12:47.021 { 00:12:47.021 "name": "BaseBdev1", 00:12:47.021 "uuid": "8c28f958-455a-51d7-bdca-51b1fa8fde79", 00:12:47.021 "is_configured": true, 00:12:47.021 "data_offset": 2048, 00:12:47.021 "data_size": 63488 00:12:47.021 }, 00:12:47.021 { 00:12:47.021 "name": "BaseBdev2", 00:12:47.021 "uuid": 
"10d13f55-2764-5ffe-a30e-6384985952e9", 00:12:47.021 "is_configured": true, 00:12:47.021 "data_offset": 2048, 00:12:47.021 "data_size": 63488 00:12:47.021 } 00:12:47.021 ] 00:12:47.021 }' 00:12:47.021 13:32:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.021 13:32:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.280 13:32:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:47.280 13:32:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:47.538 [2024-11-20 13:32:46.856079] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:48.474 13:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:48.474 13:32:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.474 13:32:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.474 13:32:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.474 13:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:48.474 13:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:48.474 13:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:12:48.474 13:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:48.474 13:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:48.474 13:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:48.474 13:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:12:48.474 13:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:48.474 13:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:48.474 13:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.474 13:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.474 13:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.474 13:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.474 13:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.474 13:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.474 13:32:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.474 13:32:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.474 13:32:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.474 13:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.474 "name": "raid_bdev1", 00:12:48.474 "uuid": "5c6da89c-7967-4d6f-8bd1-ddb03265b1c3", 00:12:48.474 "strip_size_kb": 64, 00:12:48.474 "state": "online", 00:12:48.474 "raid_level": "raid0", 00:12:48.474 "superblock": true, 00:12:48.474 "num_base_bdevs": 2, 00:12:48.474 "num_base_bdevs_discovered": 2, 00:12:48.474 "num_base_bdevs_operational": 2, 00:12:48.474 "base_bdevs_list": [ 00:12:48.474 { 00:12:48.474 "name": "BaseBdev1", 00:12:48.474 "uuid": "8c28f958-455a-51d7-bdca-51b1fa8fde79", 00:12:48.474 "is_configured": true, 00:12:48.474 "data_offset": 2048, 00:12:48.474 "data_size": 63488 00:12:48.474 }, 00:12:48.474 { 00:12:48.474 "name": "BaseBdev2", 00:12:48.474 "uuid": 
"10d13f55-2764-5ffe-a30e-6384985952e9", 00:12:48.474 "is_configured": true, 00:12:48.474 "data_offset": 2048, 00:12:48.474 "data_size": 63488 00:12:48.474 } 00:12:48.474 ] 00:12:48.474 }' 00:12:48.474 13:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.474 13:32:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.733 13:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:48.733 13:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.733 13:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.733 [2024-11-20 13:32:48.209719] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:48.733 [2024-11-20 13:32:48.209781] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:48.733 [2024-11-20 13:32:48.212542] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:48.733 [2024-11-20 13:32:48.212598] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:48.733 [2024-11-20 13:32:48.212637] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:48.733 [2024-11-20 13:32:48.212653] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:48.733 { 00:12:48.733 "results": [ 00:12:48.733 { 00:12:48.733 "job": "raid_bdev1", 00:12:48.733 "core_mask": "0x1", 00:12:48.733 "workload": "randrw", 00:12:48.733 "percentage": 50, 00:12:48.733 "status": "finished", 00:12:48.733 "queue_depth": 1, 00:12:48.733 "io_size": 131072, 00:12:48.733 "runtime": 1.353166, 00:12:48.733 "iops": 14271.715369732909, 00:12:48.733 "mibps": 1783.9644212166136, 00:12:48.733 "io_failed": 1, 00:12:48.733 "io_timeout": 0, 00:12:48.733 "avg_latency_us": 
97.85524044087082, 00:12:48.733 "min_latency_us": 26.936546184738955, 00:12:48.733 "max_latency_us": 1421.2626506024096 00:12:48.733 } 00:12:48.733 ], 00:12:48.733 "core_count": 1 00:12:48.733 } 00:12:48.733 13:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.733 13:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61202 00:12:48.733 13:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61202 ']' 00:12:48.733 13:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61202 00:12:48.992 13:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:48.992 13:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:48.992 13:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61202 00:12:48.992 killing process with pid 61202 00:12:48.992 13:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:48.992 13:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:48.992 13:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61202' 00:12:48.992 13:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61202 00:12:48.992 [2024-11-20 13:32:48.263189] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:48.992 13:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61202 00:12:48.992 [2024-11-20 13:32:48.410369] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:50.370 13:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.2SvGe2JWFo 00:12:50.370 13:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:50.370 
13:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:50.370 13:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:12:50.370 13:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:50.370 ************************************ 00:12:50.370 END TEST raid_read_error_test 00:12:50.370 ************************************ 00:12:50.370 13:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:50.370 13:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:50.370 13:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:12:50.370 00:12:50.370 real 0m4.559s 00:12:50.370 user 0m5.306s 00:12:50.370 sys 0m0.731s 00:12:50.370 13:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:50.370 13:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.370 13:32:49 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:12:50.370 13:32:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:50.370 13:32:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:50.370 13:32:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:50.370 ************************************ 00:12:50.370 START TEST raid_write_error_test 00:12:50.370 ************************************ 00:12:50.370 13:32:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:12:50.370 13:32:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:50.370 13:32:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:12:50.370 13:32:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 
00:12:50.370 13:32:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:50.370 13:32:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:50.370 13:32:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:50.370 13:32:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:50.370 13:32:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:50.370 13:32:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:50.370 13:32:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:50.370 13:32:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:50.370 13:32:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:50.370 13:32:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:50.370 13:32:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:50.370 13:32:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:50.371 13:32:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:50.371 13:32:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:50.371 13:32:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:50.371 13:32:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:50.371 13:32:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:50.371 13:32:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:50.371 13:32:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:50.371 13:32:49 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.OYidzG6aMf 00:12:50.371 13:32:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61347 00:12:50.371 13:32:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61347 00:12:50.371 13:32:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:50.371 13:32:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61347 ']' 00:12:50.371 13:32:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.371 13:32:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:50.371 13:32:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.371 13:32:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:50.371 13:32:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.630 [2024-11-20 13:32:49.887223] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:12:50.630 [2024-11-20 13:32:49.887352] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61347 ] 00:12:50.630 [2024-11-20 13:32:50.067288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.889 [2024-11-20 13:32:50.181084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.148 [2024-11-20 13:32:50.383328] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:51.148 [2024-11-20 13:32:50.383385] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.408 BaseBdev1_malloc 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.408 true 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.408 [2024-11-20 13:32:50.785248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:51.408 [2024-11-20 13:32:50.785418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:51.408 [2024-11-20 13:32:50.785449] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:51.408 [2024-11-20 13:32:50.785464] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.408 [2024-11-20 13:32:50.787773] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.408 [2024-11-20 13:32:50.787817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:51.408 BaseBdev1 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.408 BaseBdev2_malloc 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:51.408 13:32:50 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.408 true 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.408 [2024-11-20 13:32:50.842203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:51.408 [2024-11-20 13:32:50.842256] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:51.408 [2024-11-20 13:32:50.842282] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:51.408 [2024-11-20 13:32:50.842296] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.408 [2024-11-20 13:32:50.844587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.408 [2024-11-20 13:32:50.844744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:51.408 BaseBdev2 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.408 [2024-11-20 13:32:50.850251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:12:51.408 [2024-11-20 13:32:50.852314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:51.408 [2024-11-20 13:32:50.852492] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:51.408 [2024-11-20 13:32:50.852512] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:51.408 [2024-11-20 13:32:50.852751] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:51.408 [2024-11-20 13:32:50.852908] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:51.408 [2024-11-20 13:32:50.852923] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:51.408 [2024-11-20 13:32:50.853089] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.408 13:32:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.667 13:32:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.667 "name": "raid_bdev1", 00:12:51.667 "uuid": "66670342-76bf-4ced-8047-dc1847caacfd", 00:12:51.667 "strip_size_kb": 64, 00:12:51.667 "state": "online", 00:12:51.667 "raid_level": "raid0", 00:12:51.667 "superblock": true, 00:12:51.667 "num_base_bdevs": 2, 00:12:51.668 "num_base_bdevs_discovered": 2, 00:12:51.668 "num_base_bdevs_operational": 2, 00:12:51.668 "base_bdevs_list": [ 00:12:51.668 { 00:12:51.668 "name": "BaseBdev1", 00:12:51.668 "uuid": "4ccca638-7045-5e24-9570-e031e6cc915f", 00:12:51.668 "is_configured": true, 00:12:51.668 "data_offset": 2048, 00:12:51.668 "data_size": 63488 00:12:51.668 }, 00:12:51.668 { 00:12:51.668 "name": "BaseBdev2", 00:12:51.668 "uuid": "95963b03-21af-5029-aa85-238f615a93a8", 00:12:51.668 "is_configured": true, 00:12:51.668 "data_offset": 2048, 00:12:51.668 "data_size": 63488 00:12:51.668 } 00:12:51.668 ] 00:12:51.668 }' 00:12:51.668 13:32:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.668 13:32:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.927 13:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:51.927 13:32:51 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:51.927 [2024-11-20 13:32:51.367124] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:52.875 13:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:52.875 13:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.875 13:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.875 13:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.875 13:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:52.875 13:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:52.875 13:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:12:52.875 13:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:52.875 13:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:52.875 13:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:52.875 13:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:52.875 13:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:52.875 13:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:52.875 13:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.875 13:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.875 13:32:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.875 13:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.875 13:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.875 13:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.875 13:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.875 13:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.875 13:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.875 13:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.875 "name": "raid_bdev1", 00:12:52.875 "uuid": "66670342-76bf-4ced-8047-dc1847caacfd", 00:12:52.875 "strip_size_kb": 64, 00:12:52.875 "state": "online", 00:12:52.875 "raid_level": "raid0", 00:12:52.875 "superblock": true, 00:12:52.875 "num_base_bdevs": 2, 00:12:52.875 "num_base_bdevs_discovered": 2, 00:12:52.875 "num_base_bdevs_operational": 2, 00:12:52.875 "base_bdevs_list": [ 00:12:52.875 { 00:12:52.875 "name": "BaseBdev1", 00:12:52.875 "uuid": "4ccca638-7045-5e24-9570-e031e6cc915f", 00:12:52.875 "is_configured": true, 00:12:52.875 "data_offset": 2048, 00:12:52.875 "data_size": 63488 00:12:52.875 }, 00:12:52.875 { 00:12:52.875 "name": "BaseBdev2", 00:12:52.875 "uuid": "95963b03-21af-5029-aa85-238f615a93a8", 00:12:52.875 "is_configured": true, 00:12:52.875 "data_offset": 2048, 00:12:52.875 "data_size": 63488 00:12:52.875 } 00:12:52.875 ] 00:12:52.875 }' 00:12:52.875 13:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.875 13:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.444 13:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:12:53.444 13:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.444 13:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.444 [2024-11-20 13:32:52.745946] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:53.444 [2024-11-20 13:32:52.745985] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:53.444 [2024-11-20 13:32:52.748686] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:53.444 [2024-11-20 13:32:52.748875] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:53.444 [2024-11-20 13:32:52.748924] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:53.444 [2024-11-20 13:32:52.748940] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:53.444 { 00:12:53.444 "results": [ 00:12:53.444 { 00:12:53.444 "job": "raid_bdev1", 00:12:53.444 "core_mask": "0x1", 00:12:53.444 "workload": "randrw", 00:12:53.444 "percentage": 50, 00:12:53.444 "status": "finished", 00:12:53.444 "queue_depth": 1, 00:12:53.444 "io_size": 131072, 00:12:53.444 "runtime": 1.378754, 00:12:53.444 "iops": 16350.995173903393, 00:12:53.444 "mibps": 2043.8743967379241, 00:12:53.444 "io_failed": 1, 00:12:53.444 "io_timeout": 0, 00:12:53.444 "avg_latency_us": 84.23809373666766, 00:12:53.444 "min_latency_us": 27.142168674698794, 00:12:53.444 "max_latency_us": 1506.8016064257029 00:12:53.444 } 00:12:53.444 ], 00:12:53.444 "core_count": 1 00:12:53.444 } 00:12:53.444 13:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.444 13:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61347 00:12:53.444 13:32:52 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 61347 ']' 00:12:53.444 13:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61347 00:12:53.444 13:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:53.444 13:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:53.444 13:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61347 00:12:53.444 killing process with pid 61347 00:12:53.444 13:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:53.444 13:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:53.444 13:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61347' 00:12:53.444 13:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61347 00:12:53.444 [2024-11-20 13:32:52.800650] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:53.444 13:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61347 00:12:53.704 [2024-11-20 13:32:52.940129] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:55.082 13:32:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.OYidzG6aMf 00:12:55.082 13:32:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:55.082 13:32:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:55.082 13:32:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:12:55.082 13:32:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:55.082 13:32:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:55.082 13:32:54 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:12:55.082 13:32:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:12:55.082 00:12:55.082 real 0m4.381s 00:12:55.082 user 0m5.206s 00:12:55.082 sys 0m0.615s 00:12:55.082 13:32:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:55.082 13:32:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.082 ************************************ 00:12:55.082 END TEST raid_write_error_test 00:12:55.082 ************************************ 00:12:55.082 13:32:54 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:55.082 13:32:54 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:12:55.082 13:32:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:55.082 13:32:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:55.082 13:32:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:55.082 ************************************ 00:12:55.082 START TEST raid_state_function_test 00:12:55.082 ************************************ 00:12:55.082 13:32:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:12:55.082 13:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:55.082 13:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:12:55.082 13:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:55.082 13:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:55.082 13:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:55.082 13:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:12:55.082 13:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:55.082 13:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:55.082 13:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:55.082 13:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:55.082 13:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:55.082 13:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:55.082 13:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:55.082 13:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:55.082 13:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:55.082 13:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:55.082 13:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:55.082 13:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:55.082 13:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:55.082 13:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:55.082 13:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:55.082 13:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:55.082 13:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:55.082 Process raid pid: 61491 00:12:55.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:55.082 13:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61491 00:12:55.082 13:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:55.083 13:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61491' 00:12:55.083 13:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61491 00:12:55.083 13:32:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61491 ']' 00:12:55.083 13:32:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.083 13:32:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:55.083 13:32:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.083 13:32:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:55.083 13:32:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.083 [2024-11-20 13:32:54.347134] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:12:55.083 [2024-11-20 13:32:54.347487] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:55.083 [2024-11-20 13:32:54.529280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.341 [2024-11-20 13:32:54.649104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.600 [2024-11-20 13:32:54.862003] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:55.600 [2024-11-20 13:32:54.862221] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:55.859 13:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:55.859 13:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:55.859 13:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:55.859 13:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.859 13:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.859 [2024-11-20 13:32:55.195604] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:55.859 [2024-11-20 13:32:55.195661] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:55.860 [2024-11-20 13:32:55.195673] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:55.860 [2024-11-20 13:32:55.195686] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:55.860 13:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.860 13:32:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:55.860 13:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:55.860 13:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:55.860 13:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:55.860 13:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:55.860 13:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:55.860 13:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.860 13:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.860 13:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.860 13:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.860 13:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.860 13:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.860 13:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.860 13:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.860 13:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.860 13:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.860 "name": "Existed_Raid", 00:12:55.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.860 "strip_size_kb": 64, 00:12:55.860 "state": "configuring", 00:12:55.860 
"raid_level": "concat", 00:12:55.860 "superblock": false, 00:12:55.860 "num_base_bdevs": 2, 00:12:55.860 "num_base_bdevs_discovered": 0, 00:12:55.860 "num_base_bdevs_operational": 2, 00:12:55.860 "base_bdevs_list": [ 00:12:55.860 { 00:12:55.860 "name": "BaseBdev1", 00:12:55.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.860 "is_configured": false, 00:12:55.860 "data_offset": 0, 00:12:55.860 "data_size": 0 00:12:55.860 }, 00:12:55.860 { 00:12:55.860 "name": "BaseBdev2", 00:12:55.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.860 "is_configured": false, 00:12:55.860 "data_offset": 0, 00:12:55.860 "data_size": 0 00:12:55.860 } 00:12:55.860 ] 00:12:55.860 }' 00:12:55.860 13:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.860 13:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.428 13:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:56.428 13:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.428 13:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.428 [2024-11-20 13:32:55.638978] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:56.428 [2024-11-20 13:32:55.639016] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:56.428 13:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.428 13:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:56.428 13:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.428 13:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:12:56.428 [2024-11-20 13:32:55.650929] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:56.428 [2024-11-20 13:32:55.651111] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:56.428 [2024-11-20 13:32:55.651133] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:56.428 [2024-11-20 13:32:55.651152] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:56.428 13:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.428 13:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:56.428 13:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.428 13:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.428 [2024-11-20 13:32:55.701222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:56.428 BaseBdev1 00:12:56.428 13:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.428 13:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:56.428 13:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:56.428 13:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:56.428 13:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:56.428 13:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:56.428 13:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:56.428 13:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:12:56.428 13:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.428 13:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.428 13:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.428 13:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:56.428 13:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.429 13:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.429 [ 00:12:56.429 { 00:12:56.429 "name": "BaseBdev1", 00:12:56.429 "aliases": [ 00:12:56.429 "cef40d9e-b91c-4b21-b7ca-dad75ab052dd" 00:12:56.429 ], 00:12:56.429 "product_name": "Malloc disk", 00:12:56.429 "block_size": 512, 00:12:56.429 "num_blocks": 65536, 00:12:56.429 "uuid": "cef40d9e-b91c-4b21-b7ca-dad75ab052dd", 00:12:56.429 "assigned_rate_limits": { 00:12:56.429 "rw_ios_per_sec": 0, 00:12:56.429 "rw_mbytes_per_sec": 0, 00:12:56.429 "r_mbytes_per_sec": 0, 00:12:56.429 "w_mbytes_per_sec": 0 00:12:56.429 }, 00:12:56.429 "claimed": true, 00:12:56.429 "claim_type": "exclusive_write", 00:12:56.429 "zoned": false, 00:12:56.429 "supported_io_types": { 00:12:56.429 "read": true, 00:12:56.429 "write": true, 00:12:56.429 "unmap": true, 00:12:56.429 "flush": true, 00:12:56.429 "reset": true, 00:12:56.429 "nvme_admin": false, 00:12:56.429 "nvme_io": false, 00:12:56.429 "nvme_io_md": false, 00:12:56.429 "write_zeroes": true, 00:12:56.429 "zcopy": true, 00:12:56.429 "get_zone_info": false, 00:12:56.429 "zone_management": false, 00:12:56.429 "zone_append": false, 00:12:56.429 "compare": false, 00:12:56.429 "compare_and_write": false, 00:12:56.429 "abort": true, 00:12:56.429 "seek_hole": false, 00:12:56.429 "seek_data": false, 00:12:56.429 "copy": true, 00:12:56.429 "nvme_iov_md": 
false 00:12:56.429 }, 00:12:56.429 "memory_domains": [ 00:12:56.429 { 00:12:56.429 "dma_device_id": "system", 00:12:56.429 "dma_device_type": 1 00:12:56.429 }, 00:12:56.429 { 00:12:56.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.429 "dma_device_type": 2 00:12:56.429 } 00:12:56.429 ], 00:12:56.429 "driver_specific": {} 00:12:56.429 } 00:12:56.429 ] 00:12:56.429 13:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.429 13:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:56.429 13:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:56.429 13:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:56.429 13:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:56.429 13:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:56.429 13:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:56.429 13:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:56.429 13:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.429 13:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.429 13:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.429 13:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.429 13:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.429 13:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.429 13:32:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.429 13:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.429 13:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.429 13:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.429 "name": "Existed_Raid", 00:12:56.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.429 "strip_size_kb": 64, 00:12:56.429 "state": "configuring", 00:12:56.429 "raid_level": "concat", 00:12:56.429 "superblock": false, 00:12:56.429 "num_base_bdevs": 2, 00:12:56.429 "num_base_bdevs_discovered": 1, 00:12:56.429 "num_base_bdevs_operational": 2, 00:12:56.429 "base_bdevs_list": [ 00:12:56.429 { 00:12:56.429 "name": "BaseBdev1", 00:12:56.429 "uuid": "cef40d9e-b91c-4b21-b7ca-dad75ab052dd", 00:12:56.429 "is_configured": true, 00:12:56.429 "data_offset": 0, 00:12:56.429 "data_size": 65536 00:12:56.429 }, 00:12:56.429 { 00:12:56.429 "name": "BaseBdev2", 00:12:56.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.429 "is_configured": false, 00:12:56.429 "data_offset": 0, 00:12:56.429 "data_size": 0 00:12:56.429 } 00:12:56.429 ] 00:12:56.429 }' 00:12:56.429 13:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.429 13:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.692 13:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:56.692 13:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.692 13:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.958 [2024-11-20 13:32:56.181228] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:56.958 [2024-11-20 13:32:56.181416] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:56.958 13:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.958 13:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:56.958 13:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.958 13:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.958 [2024-11-20 13:32:56.193237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:56.958 [2024-11-20 13:32:56.195354] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:56.958 [2024-11-20 13:32:56.195519] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:56.958 13:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.958 13:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:56.958 13:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:56.958 13:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:56.958 13:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:56.958 13:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:56.958 13:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:56.958 13:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:56.958 13:32:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:56.958 13:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.958 13:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.958 13:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.958 13:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.958 13:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.958 13:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.958 13:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.958 13:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.958 13:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.958 13:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.958 "name": "Existed_Raid", 00:12:56.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.958 "strip_size_kb": 64, 00:12:56.958 "state": "configuring", 00:12:56.958 "raid_level": "concat", 00:12:56.958 "superblock": false, 00:12:56.958 "num_base_bdevs": 2, 00:12:56.958 "num_base_bdevs_discovered": 1, 00:12:56.958 "num_base_bdevs_operational": 2, 00:12:56.958 "base_bdevs_list": [ 00:12:56.958 { 00:12:56.958 "name": "BaseBdev1", 00:12:56.958 "uuid": "cef40d9e-b91c-4b21-b7ca-dad75ab052dd", 00:12:56.958 "is_configured": true, 00:12:56.958 "data_offset": 0, 00:12:56.958 "data_size": 65536 00:12:56.958 }, 00:12:56.958 { 00:12:56.958 "name": "BaseBdev2", 00:12:56.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.959 "is_configured": false, 00:12:56.959 "data_offset": 0, 00:12:56.959 "data_size": 0 
00:12:56.959 } 00:12:56.959 ] 00:12:56.959 }' 00:12:56.959 13:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.959 13:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.218 13:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:57.218 13:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.218 13:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.218 [2024-11-20 13:32:56.649991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:57.218 [2024-11-20 13:32:56.650043] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:57.218 [2024-11-20 13:32:56.650054] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:57.218 [2024-11-20 13:32:56.650400] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:57.218 [2024-11-20 13:32:56.650570] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:57.218 [2024-11-20 13:32:56.650585] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:57.218 [2024-11-20 13:32:56.650877] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:57.218 BaseBdev2 00:12:57.218 13:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.218 13:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:57.218 13:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:57.218 13:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:57.218 13:32:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:57.218 13:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:57.218 13:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:57.218 13:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:57.218 13:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.218 13:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.218 13:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.218 13:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:57.218 13:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.219 13:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.219 [ 00:12:57.219 { 00:12:57.219 "name": "BaseBdev2", 00:12:57.219 "aliases": [ 00:12:57.219 "45478f8d-35f7-4ecc-bc4d-f75580583d26" 00:12:57.219 ], 00:12:57.219 "product_name": "Malloc disk", 00:12:57.219 "block_size": 512, 00:12:57.219 "num_blocks": 65536, 00:12:57.219 "uuid": "45478f8d-35f7-4ecc-bc4d-f75580583d26", 00:12:57.219 "assigned_rate_limits": { 00:12:57.219 "rw_ios_per_sec": 0, 00:12:57.219 "rw_mbytes_per_sec": 0, 00:12:57.219 "r_mbytes_per_sec": 0, 00:12:57.219 "w_mbytes_per_sec": 0 00:12:57.219 }, 00:12:57.219 "claimed": true, 00:12:57.219 "claim_type": "exclusive_write", 00:12:57.219 "zoned": false, 00:12:57.219 "supported_io_types": { 00:12:57.219 "read": true, 00:12:57.219 "write": true, 00:12:57.219 "unmap": true, 00:12:57.219 "flush": true, 00:12:57.219 "reset": true, 00:12:57.219 "nvme_admin": false, 00:12:57.219 "nvme_io": false, 00:12:57.219 "nvme_io_md": 
false, 00:12:57.219 "write_zeroes": true, 00:12:57.219 "zcopy": true, 00:12:57.219 "get_zone_info": false, 00:12:57.219 "zone_management": false, 00:12:57.219 "zone_append": false, 00:12:57.219 "compare": false, 00:12:57.219 "compare_and_write": false, 00:12:57.219 "abort": true, 00:12:57.219 "seek_hole": false, 00:12:57.219 "seek_data": false, 00:12:57.219 "copy": true, 00:12:57.219 "nvme_iov_md": false 00:12:57.219 }, 00:12:57.219 "memory_domains": [ 00:12:57.219 { 00:12:57.219 "dma_device_id": "system", 00:12:57.219 "dma_device_type": 1 00:12:57.219 }, 00:12:57.219 { 00:12:57.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.219 "dma_device_type": 2 00:12:57.219 } 00:12:57.219 ], 00:12:57.219 "driver_specific": {} 00:12:57.219 } 00:12:57.219 ] 00:12:57.219 13:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.219 13:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:57.219 13:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:57.219 13:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:57.219 13:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:12:57.219 13:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:57.219 13:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:57.219 13:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:57.219 13:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:57.219 13:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:57.219 13:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:57.219 13:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.219 13:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.219 13:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.219 13:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.219 13:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.219 13:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.219 13:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:57.478 13:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.478 13:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.478 "name": "Existed_Raid", 00:12:57.478 "uuid": "a2922d1c-5944-4c0d-8ea0-f722a5d3a6f0", 00:12:57.478 "strip_size_kb": 64, 00:12:57.478 "state": "online", 00:12:57.478 "raid_level": "concat", 00:12:57.478 "superblock": false, 00:12:57.478 "num_base_bdevs": 2, 00:12:57.478 "num_base_bdevs_discovered": 2, 00:12:57.478 "num_base_bdevs_operational": 2, 00:12:57.478 "base_bdevs_list": [ 00:12:57.478 { 00:12:57.478 "name": "BaseBdev1", 00:12:57.478 "uuid": "cef40d9e-b91c-4b21-b7ca-dad75ab052dd", 00:12:57.478 "is_configured": true, 00:12:57.478 "data_offset": 0, 00:12:57.478 "data_size": 65536 00:12:57.478 }, 00:12:57.478 { 00:12:57.478 "name": "BaseBdev2", 00:12:57.478 "uuid": "45478f8d-35f7-4ecc-bc4d-f75580583d26", 00:12:57.478 "is_configured": true, 00:12:57.478 "data_offset": 0, 00:12:57.478 "data_size": 65536 00:12:57.478 } 00:12:57.478 ] 00:12:57.478 }' 00:12:57.478 13:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:12:57.478 13:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.737 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:57.737 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:57.737 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:57.737 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:57.738 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:57.738 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:57.738 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:57.738 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:57.738 13:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.738 13:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.738 [2024-11-20 13:32:57.137645] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:57.738 13:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.738 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:57.738 "name": "Existed_Raid", 00:12:57.738 "aliases": [ 00:12:57.738 "a2922d1c-5944-4c0d-8ea0-f722a5d3a6f0" 00:12:57.738 ], 00:12:57.738 "product_name": "Raid Volume", 00:12:57.738 "block_size": 512, 00:12:57.738 "num_blocks": 131072, 00:12:57.738 "uuid": "a2922d1c-5944-4c0d-8ea0-f722a5d3a6f0", 00:12:57.738 "assigned_rate_limits": { 00:12:57.738 "rw_ios_per_sec": 0, 00:12:57.738 "rw_mbytes_per_sec": 0, 00:12:57.738 "r_mbytes_per_sec": 
0, 00:12:57.738 "w_mbytes_per_sec": 0 00:12:57.738 }, 00:12:57.738 "claimed": false, 00:12:57.738 "zoned": false, 00:12:57.738 "supported_io_types": { 00:12:57.738 "read": true, 00:12:57.738 "write": true, 00:12:57.738 "unmap": true, 00:12:57.738 "flush": true, 00:12:57.738 "reset": true, 00:12:57.738 "nvme_admin": false, 00:12:57.738 "nvme_io": false, 00:12:57.738 "nvme_io_md": false, 00:12:57.738 "write_zeroes": true, 00:12:57.738 "zcopy": false, 00:12:57.738 "get_zone_info": false, 00:12:57.738 "zone_management": false, 00:12:57.738 "zone_append": false, 00:12:57.738 "compare": false, 00:12:57.738 "compare_and_write": false, 00:12:57.738 "abort": false, 00:12:57.738 "seek_hole": false, 00:12:57.738 "seek_data": false, 00:12:57.738 "copy": false, 00:12:57.738 "nvme_iov_md": false 00:12:57.738 }, 00:12:57.738 "memory_domains": [ 00:12:57.738 { 00:12:57.738 "dma_device_id": "system", 00:12:57.738 "dma_device_type": 1 00:12:57.738 }, 00:12:57.738 { 00:12:57.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.738 "dma_device_type": 2 00:12:57.738 }, 00:12:57.738 { 00:12:57.738 "dma_device_id": "system", 00:12:57.738 "dma_device_type": 1 00:12:57.738 }, 00:12:57.738 { 00:12:57.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.738 "dma_device_type": 2 00:12:57.738 } 00:12:57.738 ], 00:12:57.738 "driver_specific": { 00:12:57.738 "raid": { 00:12:57.738 "uuid": "a2922d1c-5944-4c0d-8ea0-f722a5d3a6f0", 00:12:57.738 "strip_size_kb": 64, 00:12:57.738 "state": "online", 00:12:57.738 "raid_level": "concat", 00:12:57.738 "superblock": false, 00:12:57.738 "num_base_bdevs": 2, 00:12:57.738 "num_base_bdevs_discovered": 2, 00:12:57.738 "num_base_bdevs_operational": 2, 00:12:57.738 "base_bdevs_list": [ 00:12:57.738 { 00:12:57.738 "name": "BaseBdev1", 00:12:57.738 "uuid": "cef40d9e-b91c-4b21-b7ca-dad75ab052dd", 00:12:57.738 "is_configured": true, 00:12:57.738 "data_offset": 0, 00:12:57.738 "data_size": 65536 00:12:57.738 }, 00:12:57.738 { 00:12:57.738 "name": "BaseBdev2", 
00:12:57.738 "uuid": "45478f8d-35f7-4ecc-bc4d-f75580583d26", 00:12:57.738 "is_configured": true, 00:12:57.738 "data_offset": 0, 00:12:57.738 "data_size": 65536 00:12:57.738 } 00:12:57.738 ] 00:12:57.738 } 00:12:57.738 } 00:12:57.738 }' 00:12:57.738 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:57.738 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:57.738 BaseBdev2' 00:12:57.738 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.997 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:57.998 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.998 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:57.998 13:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.998 13:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.998 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.998 13:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.998 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.998 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.998 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.998 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:12:57.998 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.998 13:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.998 13:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.998 13:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.998 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.998 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.998 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:57.998 13:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.998 13:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.998 [2024-11-20 13:32:57.341165] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:57.998 [2024-11-20 13:32:57.341201] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:57.998 [2024-11-20 13:32:57.341250] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:57.998 13:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.998 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:57.998 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:57.998 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:57.998 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:57.998 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:12:57.998 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:12:57.998 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:57.998 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:57.998 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:57.998 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:57.998 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:57.998 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.998 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.998 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.998 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.998 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.998 13:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.998 13:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.998 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:57.998 13:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.257 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.257 "name": "Existed_Raid", 00:12:58.257 "uuid": "a2922d1c-5944-4c0d-8ea0-f722a5d3a6f0", 00:12:58.257 "strip_size_kb": 64, 00:12:58.257 
"state": "offline", 00:12:58.257 "raid_level": "concat", 00:12:58.257 "superblock": false, 00:12:58.257 "num_base_bdevs": 2, 00:12:58.257 "num_base_bdevs_discovered": 1, 00:12:58.257 "num_base_bdevs_operational": 1, 00:12:58.257 "base_bdevs_list": [ 00:12:58.257 { 00:12:58.257 "name": null, 00:12:58.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.257 "is_configured": false, 00:12:58.257 "data_offset": 0, 00:12:58.257 "data_size": 65536 00:12:58.257 }, 00:12:58.257 { 00:12:58.257 "name": "BaseBdev2", 00:12:58.257 "uuid": "45478f8d-35f7-4ecc-bc4d-f75580583d26", 00:12:58.257 "is_configured": true, 00:12:58.257 "data_offset": 0, 00:12:58.257 "data_size": 65536 00:12:58.257 } 00:12:58.257 ] 00:12:58.257 }' 00:12:58.257 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.257 13:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.516 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:58.516 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:58.516 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.516 13:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.516 13:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.516 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:58.516 13:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.516 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:58.516 13:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:58.516 13:32:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:58.516 13:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.516 13:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.516 [2024-11-20 13:32:57.970841] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:58.516 [2024-11-20 13:32:57.971047] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:58.774 13:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.774 13:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:58.774 13:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:58.774 13:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:58.774 13:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.774 13:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.774 13:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.774 13:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.774 13:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:58.774 13:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:58.774 13:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:12:58.774 13:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61491 00:12:58.774 13:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61491 ']' 00:12:58.774 13:32:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61491 00:12:58.774 13:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:58.774 13:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:58.774 13:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61491 00:12:58.774 killing process with pid 61491 00:12:58.774 13:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:58.774 13:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:58.774 13:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61491' 00:12:58.774 13:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61491 00:12:58.774 [2024-11-20 13:32:58.177908] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:58.774 13:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61491 00:12:58.774 [2024-11-20 13:32:58.194328] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:00.148 ************************************ 00:13:00.148 END TEST raid_state_function_test 00:13:00.148 ************************************ 00:13:00.148 13:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:00.148 00:13:00.148 real 0m5.105s 00:13:00.148 user 0m7.365s 00:13:00.148 sys 0m0.853s 00:13:00.148 13:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:00.148 13:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.148 13:32:59 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:13:00.148 13:32:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:13:00.148 13:32:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:00.148 13:32:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:00.148 ************************************ 00:13:00.148 START TEST raid_state_function_test_sb 00:13:00.148 ************************************ 00:13:00.148 13:32:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:13:00.148 13:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:13:00.148 13:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:13:00.148 13:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:00.148 13:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:00.148 13:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:00.148 13:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:00.148 13:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:00.148 13:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:00.148 13:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:00.148 13:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:00.148 13:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:00.148 13:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:00.148 13:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:00.148 13:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:13:00.148 13:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:00.148 13:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:00.148 13:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:00.148 13:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:00.148 13:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:13:00.148 13:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:00.148 13:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:00.148 13:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:00.148 13:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:00.148 Process raid pid: 61744 00:13:00.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:00.149 13:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61744 00:13:00.149 13:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61744' 00:13:00.149 13:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:00.149 13:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61744 00:13:00.149 13:32:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61744 ']' 00:13:00.149 13:32:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.149 13:32:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:00.149 13:32:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.149 13:32:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:00.149 13:32:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.149 [2024-11-20 13:32:59.483629] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:13:00.149 [2024-11-20 13:32:59.484032] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:00.407 [2024-11-20 13:32:59.673984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.407 [2024-11-20 13:32:59.808355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.665 [2024-11-20 13:33:00.020390] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:00.665 [2024-11-20 13:33:00.020643] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:00.924 13:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:00.924 13:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:00.924 13:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:00.924 13:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.924 13:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.924 [2024-11-20 13:33:00.391531] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:00.924 [2024-11-20 13:33:00.391585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:00.924 [2024-11-20 13:33:00.391597] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:00.924 [2024-11-20 13:33:00.391610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:00.924 13:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:00.924 13:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:00.924 13:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:00.924 13:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:00.924 13:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:00.924 13:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:00.924 13:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:00.924 13:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.924 13:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.924 13:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.924 13:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.924 13:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.924 13:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.924 13:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.924 13:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.183 13:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.183 13:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.183 "name": "Existed_Raid", 00:13:01.183 "uuid": "0d896749-90d0-4feb-a4b0-5ba4531e1995", 00:13:01.183 
"strip_size_kb": 64, 00:13:01.183 "state": "configuring", 00:13:01.183 "raid_level": "concat", 00:13:01.183 "superblock": true, 00:13:01.183 "num_base_bdevs": 2, 00:13:01.183 "num_base_bdevs_discovered": 0, 00:13:01.183 "num_base_bdevs_operational": 2, 00:13:01.183 "base_bdevs_list": [ 00:13:01.183 { 00:13:01.183 "name": "BaseBdev1", 00:13:01.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.183 "is_configured": false, 00:13:01.183 "data_offset": 0, 00:13:01.183 "data_size": 0 00:13:01.183 }, 00:13:01.183 { 00:13:01.183 "name": "BaseBdev2", 00:13:01.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.183 "is_configured": false, 00:13:01.183 "data_offset": 0, 00:13:01.183 "data_size": 0 00:13:01.183 } 00:13:01.183 ] 00:13:01.183 }' 00:13:01.183 13:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.183 13:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.442 13:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:01.442 13:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.442 13:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.442 [2024-11-20 13:33:00.810935] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:01.442 [2024-11-20 13:33:00.810972] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:01.442 13:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.442 13:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:01.442 13:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:01.442 13:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.442 [2024-11-20 13:33:00.822909] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:01.442 [2024-11-20 13:33:00.822955] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:01.442 [2024-11-20 13:33:00.822966] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:01.442 [2024-11-20 13:33:00.822981] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:01.442 13:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.442 13:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:01.442 13:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.442 13:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.442 [2024-11-20 13:33:00.870812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:01.442 BaseBdev1 00:13:01.442 13:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.442 13:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:01.442 13:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:01.442 13:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:01.442 13:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:01.442 13:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:01.442 13:33:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:01.442 13:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:01.442 13:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.442 13:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.442 13:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.442 13:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:01.442 13:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.442 13:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.442 [ 00:13:01.442 { 00:13:01.442 "name": "BaseBdev1", 00:13:01.442 "aliases": [ 00:13:01.442 "cc34c055-866a-4601-b589-7a33455fb5ee" 00:13:01.442 ], 00:13:01.442 "product_name": "Malloc disk", 00:13:01.442 "block_size": 512, 00:13:01.442 "num_blocks": 65536, 00:13:01.443 "uuid": "cc34c055-866a-4601-b589-7a33455fb5ee", 00:13:01.443 "assigned_rate_limits": { 00:13:01.443 "rw_ios_per_sec": 0, 00:13:01.443 "rw_mbytes_per_sec": 0, 00:13:01.443 "r_mbytes_per_sec": 0, 00:13:01.443 "w_mbytes_per_sec": 0 00:13:01.443 }, 00:13:01.443 "claimed": true, 00:13:01.443 "claim_type": "exclusive_write", 00:13:01.443 "zoned": false, 00:13:01.443 "supported_io_types": { 00:13:01.443 "read": true, 00:13:01.443 "write": true, 00:13:01.443 "unmap": true, 00:13:01.443 "flush": true, 00:13:01.443 "reset": true, 00:13:01.443 "nvme_admin": false, 00:13:01.443 "nvme_io": false, 00:13:01.443 "nvme_io_md": false, 00:13:01.443 "write_zeroes": true, 00:13:01.443 "zcopy": true, 00:13:01.443 "get_zone_info": false, 00:13:01.443 "zone_management": false, 00:13:01.443 "zone_append": false, 00:13:01.443 "compare": false, 00:13:01.443 
"compare_and_write": false, 00:13:01.443 "abort": true, 00:13:01.443 "seek_hole": false, 00:13:01.443 "seek_data": false, 00:13:01.443 "copy": true, 00:13:01.443 "nvme_iov_md": false 00:13:01.443 }, 00:13:01.443 "memory_domains": [ 00:13:01.443 { 00:13:01.443 "dma_device_id": "system", 00:13:01.443 "dma_device_type": 1 00:13:01.443 }, 00:13:01.443 { 00:13:01.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.443 "dma_device_type": 2 00:13:01.443 } 00:13:01.443 ], 00:13:01.443 "driver_specific": {} 00:13:01.443 } 00:13:01.443 ] 00:13:01.443 13:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.443 13:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:01.443 13:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:01.443 13:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:01.443 13:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:01.443 13:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:01.443 13:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.443 13:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:01.443 13:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.443 13:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.443 13:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.443 13:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.443 13:33:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.443 13:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.443 13:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.443 13:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.732 13:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.732 13:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.732 "name": "Existed_Raid", 00:13:01.732 "uuid": "ce5f024c-1cd1-4da9-becc-cc2070e6f08f", 00:13:01.732 "strip_size_kb": 64, 00:13:01.732 "state": "configuring", 00:13:01.732 "raid_level": "concat", 00:13:01.732 "superblock": true, 00:13:01.732 "num_base_bdevs": 2, 00:13:01.732 "num_base_bdevs_discovered": 1, 00:13:01.732 "num_base_bdevs_operational": 2, 00:13:01.732 "base_bdevs_list": [ 00:13:01.732 { 00:13:01.732 "name": "BaseBdev1", 00:13:01.732 "uuid": "cc34c055-866a-4601-b589-7a33455fb5ee", 00:13:01.732 "is_configured": true, 00:13:01.732 "data_offset": 2048, 00:13:01.732 "data_size": 63488 00:13:01.732 }, 00:13:01.732 { 00:13:01.732 "name": "BaseBdev2", 00:13:01.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.732 "is_configured": false, 00:13:01.732 "data_offset": 0, 00:13:01.732 "data_size": 0 00:13:01.732 } 00:13:01.732 ] 00:13:01.732 }' 00:13:01.732 13:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.732 13:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.991 13:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:01.991 13:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:01.991 13:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.991 [2024-11-20 13:33:01.342197] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:01.991 [2024-11-20 13:33:01.342251] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:01.991 13:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.991 13:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:01.991 13:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.991 13:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.991 [2024-11-20 13:33:01.354292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:01.991 [2024-11-20 13:33:01.356571] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:01.991 [2024-11-20 13:33:01.356715] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:01.991 13:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.991 13:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:01.991 13:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:01.991 13:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:01.991 13:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:01.991 13:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:13:01.991 13:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:01.991 13:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.991 13:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:01.991 13:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.991 13:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.992 13:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.992 13:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.992 13:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.992 13:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.992 13:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.992 13:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.992 13:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.992 13:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.992 "name": "Existed_Raid", 00:13:01.992 "uuid": "d328f4ff-b9b7-4380-a7bc-ee34bab4eb0a", 00:13:01.992 "strip_size_kb": 64, 00:13:01.992 "state": "configuring", 00:13:01.992 "raid_level": "concat", 00:13:01.992 "superblock": true, 00:13:01.992 "num_base_bdevs": 2, 00:13:01.992 "num_base_bdevs_discovered": 1, 00:13:01.992 "num_base_bdevs_operational": 2, 00:13:01.992 "base_bdevs_list": [ 00:13:01.992 { 00:13:01.992 "name": "BaseBdev1", 00:13:01.992 "uuid": 
"cc34c055-866a-4601-b589-7a33455fb5ee", 00:13:01.992 "is_configured": true, 00:13:01.992 "data_offset": 2048, 00:13:01.992 "data_size": 63488 00:13:01.992 }, 00:13:01.992 { 00:13:01.992 "name": "BaseBdev2", 00:13:01.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.992 "is_configured": false, 00:13:01.992 "data_offset": 0, 00:13:01.992 "data_size": 0 00:13:01.992 } 00:13:01.992 ] 00:13:01.992 }' 00:13:01.992 13:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.992 13:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.560 13:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:02.560 13:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.560 13:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.560 [2024-11-20 13:33:01.792149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:02.560 [2024-11-20 13:33:01.792385] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:02.560 [2024-11-20 13:33:01.792400] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:02.560 [2024-11-20 13:33:01.792664] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:02.560 [2024-11-20 13:33:01.792818] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:02.560 [2024-11-20 13:33:01.792841] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:02.560 [2024-11-20 13:33:01.792977] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:02.560 BaseBdev2 00:13:02.560 13:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:13:02.560 13:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:02.560 13:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:02.560 13:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:02.560 13:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:02.560 13:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:02.560 13:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:02.560 13:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:02.560 13:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.560 13:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.560 13:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.560 13:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:02.560 13:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.560 13:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.560 [ 00:13:02.560 { 00:13:02.560 "name": "BaseBdev2", 00:13:02.560 "aliases": [ 00:13:02.560 "3834959d-a131-4851-bf96-da212419a6a3" 00:13:02.560 ], 00:13:02.560 "product_name": "Malloc disk", 00:13:02.560 "block_size": 512, 00:13:02.560 "num_blocks": 65536, 00:13:02.560 "uuid": "3834959d-a131-4851-bf96-da212419a6a3", 00:13:02.560 "assigned_rate_limits": { 00:13:02.560 "rw_ios_per_sec": 0, 00:13:02.560 "rw_mbytes_per_sec": 0, 00:13:02.560 "r_mbytes_per_sec": 0, 
00:13:02.560 "w_mbytes_per_sec": 0 00:13:02.560 }, 00:13:02.560 "claimed": true, 00:13:02.560 "claim_type": "exclusive_write", 00:13:02.560 "zoned": false, 00:13:02.560 "supported_io_types": { 00:13:02.560 "read": true, 00:13:02.560 "write": true, 00:13:02.560 "unmap": true, 00:13:02.560 "flush": true, 00:13:02.560 "reset": true, 00:13:02.560 "nvme_admin": false, 00:13:02.560 "nvme_io": false, 00:13:02.560 "nvme_io_md": false, 00:13:02.560 "write_zeroes": true, 00:13:02.560 "zcopy": true, 00:13:02.560 "get_zone_info": false, 00:13:02.560 "zone_management": false, 00:13:02.560 "zone_append": false, 00:13:02.560 "compare": false, 00:13:02.560 "compare_and_write": false, 00:13:02.560 "abort": true, 00:13:02.560 "seek_hole": false, 00:13:02.560 "seek_data": false, 00:13:02.560 "copy": true, 00:13:02.560 "nvme_iov_md": false 00:13:02.560 }, 00:13:02.560 "memory_domains": [ 00:13:02.560 { 00:13:02.560 "dma_device_id": "system", 00:13:02.560 "dma_device_type": 1 00:13:02.560 }, 00:13:02.560 { 00:13:02.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.560 "dma_device_type": 2 00:13:02.560 } 00:13:02.560 ], 00:13:02.560 "driver_specific": {} 00:13:02.560 } 00:13:02.560 ] 00:13:02.560 13:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.560 13:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:02.560 13:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:02.560 13:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:02.560 13:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:13:02.560 13:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:02.560 13:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:13:02.560 13:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:02.560 13:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:02.560 13:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:02.560 13:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.560 13:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.560 13:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.560 13:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.560 13:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.560 13:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.560 13:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.560 13:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.560 13:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.560 13:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.560 "name": "Existed_Raid", 00:13:02.560 "uuid": "d328f4ff-b9b7-4380-a7bc-ee34bab4eb0a", 00:13:02.560 "strip_size_kb": 64, 00:13:02.560 "state": "online", 00:13:02.560 "raid_level": "concat", 00:13:02.560 "superblock": true, 00:13:02.560 "num_base_bdevs": 2, 00:13:02.560 "num_base_bdevs_discovered": 2, 00:13:02.560 "num_base_bdevs_operational": 2, 00:13:02.560 "base_bdevs_list": [ 00:13:02.560 { 00:13:02.560 "name": "BaseBdev1", 00:13:02.560 "uuid": 
"cc34c055-866a-4601-b589-7a33455fb5ee", 00:13:02.560 "is_configured": true, 00:13:02.560 "data_offset": 2048, 00:13:02.560 "data_size": 63488 00:13:02.560 }, 00:13:02.560 { 00:13:02.560 "name": "BaseBdev2", 00:13:02.560 "uuid": "3834959d-a131-4851-bf96-da212419a6a3", 00:13:02.560 "is_configured": true, 00:13:02.560 "data_offset": 2048, 00:13:02.560 "data_size": 63488 00:13:02.560 } 00:13:02.560 ] 00:13:02.560 }' 00:13:02.560 13:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.560 13:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.820 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:02.820 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:02.820 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:02.820 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:02.820 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:02.820 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:02.820 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:02.820 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:02.820 13:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.820 13:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.820 [2024-11-20 13:33:02.267757] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:02.820 13:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:13:02.820 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:02.820 "name": "Existed_Raid", 00:13:02.820 "aliases": [ 00:13:02.820 "d328f4ff-b9b7-4380-a7bc-ee34bab4eb0a" 00:13:02.820 ], 00:13:02.820 "product_name": "Raid Volume", 00:13:02.820 "block_size": 512, 00:13:02.820 "num_blocks": 126976, 00:13:02.820 "uuid": "d328f4ff-b9b7-4380-a7bc-ee34bab4eb0a", 00:13:02.820 "assigned_rate_limits": { 00:13:02.820 "rw_ios_per_sec": 0, 00:13:02.820 "rw_mbytes_per_sec": 0, 00:13:02.820 "r_mbytes_per_sec": 0, 00:13:02.820 "w_mbytes_per_sec": 0 00:13:02.820 }, 00:13:02.820 "claimed": false, 00:13:02.820 "zoned": false, 00:13:02.820 "supported_io_types": { 00:13:02.820 "read": true, 00:13:02.820 "write": true, 00:13:02.820 "unmap": true, 00:13:02.820 "flush": true, 00:13:02.820 "reset": true, 00:13:02.820 "nvme_admin": false, 00:13:02.820 "nvme_io": false, 00:13:02.820 "nvme_io_md": false, 00:13:02.820 "write_zeroes": true, 00:13:02.820 "zcopy": false, 00:13:02.820 "get_zone_info": false, 00:13:02.820 "zone_management": false, 00:13:02.820 "zone_append": false, 00:13:02.820 "compare": false, 00:13:02.820 "compare_and_write": false, 00:13:02.820 "abort": false, 00:13:02.820 "seek_hole": false, 00:13:02.820 "seek_data": false, 00:13:02.820 "copy": false, 00:13:02.820 "nvme_iov_md": false 00:13:02.820 }, 00:13:02.820 "memory_domains": [ 00:13:02.820 { 00:13:02.820 "dma_device_id": "system", 00:13:02.820 "dma_device_type": 1 00:13:02.820 }, 00:13:02.820 { 00:13:02.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.820 "dma_device_type": 2 00:13:02.820 }, 00:13:02.820 { 00:13:02.820 "dma_device_id": "system", 00:13:02.820 "dma_device_type": 1 00:13:02.820 }, 00:13:02.820 { 00:13:02.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.820 "dma_device_type": 2 00:13:02.820 } 00:13:02.820 ], 00:13:02.820 "driver_specific": { 00:13:02.820 "raid": { 00:13:02.820 "uuid": "d328f4ff-b9b7-4380-a7bc-ee34bab4eb0a", 00:13:02.820 
"strip_size_kb": 64, 00:13:02.820 "state": "online", 00:13:02.820 "raid_level": "concat", 00:13:02.820 "superblock": true, 00:13:02.820 "num_base_bdevs": 2, 00:13:02.820 "num_base_bdevs_discovered": 2, 00:13:02.820 "num_base_bdevs_operational": 2, 00:13:02.820 "base_bdevs_list": [ 00:13:02.820 { 00:13:02.820 "name": "BaseBdev1", 00:13:02.820 "uuid": "cc34c055-866a-4601-b589-7a33455fb5ee", 00:13:02.820 "is_configured": true, 00:13:02.820 "data_offset": 2048, 00:13:02.820 "data_size": 63488 00:13:02.820 }, 00:13:02.820 { 00:13:02.820 "name": "BaseBdev2", 00:13:02.820 "uuid": "3834959d-a131-4851-bf96-da212419a6a3", 00:13:02.820 "is_configured": true, 00:13:02.820 "data_offset": 2048, 00:13:02.820 "data_size": 63488 00:13:02.820 } 00:13:02.821 ] 00:13:02.821 } 00:13:02.821 } 00:13:02.821 }' 00:13:02.821 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:03.080 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:03.080 BaseBdev2' 00:13:03.080 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.080 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:03.080 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:03.080 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:03.080 13:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.080 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.080 13:33:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:03.080 13:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.080 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:03.080 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:03.080 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:03.080 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:03.080 13:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.080 13:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.080 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.080 13:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.080 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:03.080 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:03.080 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:03.080 13:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.080 13:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.080 [2024-11-20 13:33:02.463255] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:03.080 [2024-11-20 13:33:02.463291] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:03.080 [2024-11-20 13:33:02.463341] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:03.080 13:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.080 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:03.080 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:13:03.080 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:03.080 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:13:03.338 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:03.338 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:13:03.338 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:03.338 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:03.338 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:03.338 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:03.338 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:03.338 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.338 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.338 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.338 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.338 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:13:03.338 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:03.338 13:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.338 13:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.338 13:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.338 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.338 "name": "Existed_Raid", 00:13:03.338 "uuid": "d328f4ff-b9b7-4380-a7bc-ee34bab4eb0a", 00:13:03.338 "strip_size_kb": 64, 00:13:03.338 "state": "offline", 00:13:03.338 "raid_level": "concat", 00:13:03.339 "superblock": true, 00:13:03.339 "num_base_bdevs": 2, 00:13:03.339 "num_base_bdevs_discovered": 1, 00:13:03.339 "num_base_bdevs_operational": 1, 00:13:03.339 "base_bdevs_list": [ 00:13:03.339 { 00:13:03.339 "name": null, 00:13:03.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.339 "is_configured": false, 00:13:03.339 "data_offset": 0, 00:13:03.339 "data_size": 63488 00:13:03.339 }, 00:13:03.339 { 00:13:03.339 "name": "BaseBdev2", 00:13:03.339 "uuid": "3834959d-a131-4851-bf96-da212419a6a3", 00:13:03.339 "is_configured": true, 00:13:03.339 "data_offset": 2048, 00:13:03.339 "data_size": 63488 00:13:03.339 } 00:13:03.339 ] 00:13:03.339 }' 00:13:03.339 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.339 13:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.598 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:03.598 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:03.598 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:03.598 
13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.598 13:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.598 13:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.598 13:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.598 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:03.598 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:03.598 13:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:03.598 13:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.598 13:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.598 [2024-11-20 13:33:03.000422] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:03.598 [2024-11-20 13:33:03.000478] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:03.857 13:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.857 13:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:03.857 13:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:03.857 13:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.857 13:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:03.857 13:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.857 13:33:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.857 13:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.857 13:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:03.857 13:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:03.857 13:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:13:03.857 13:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61744 00:13:03.857 13:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61744 ']' 00:13:03.857 13:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61744 00:13:03.857 13:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:03.857 13:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:03.857 13:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61744 00:13:03.857 killing process with pid 61744 00:13:03.857 13:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:03.857 13:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:03.857 13:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61744' 00:13:03.857 13:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61744 00:13:03.857 [2024-11-20 13:33:03.193819] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:03.857 13:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61744 00:13:03.857 [2024-11-20 13:33:03.211386] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:05.234 13:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:05.234 00:13:05.234 real 0m4.963s 00:13:05.234 user 0m7.103s 00:13:05.234 sys 0m0.891s 00:13:05.234 13:33:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:05.234 13:33:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.234 ************************************ 00:13:05.234 END TEST raid_state_function_test_sb 00:13:05.234 ************************************ 00:13:05.234 13:33:04 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:13:05.234 13:33:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:05.234 13:33:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:05.234 13:33:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:05.234 ************************************ 00:13:05.234 START TEST raid_superblock_test 00:13:05.234 ************************************ 00:13:05.234 13:33:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:13:05.234 13:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:13:05.234 13:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:13:05.234 13:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:05.234 13:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:05.234 13:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:05.234 13:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:05.234 13:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:05.234 
13:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:05.234 13:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:05.234 13:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:05.234 13:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:05.234 13:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:05.235 13:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:05.235 13:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:13:05.235 13:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:05.235 13:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:05.235 13:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61985 00:13:05.235 13:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:05.235 13:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61985 00:13:05.235 13:33:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61985 ']' 00:13:05.235 13:33:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:05.235 13:33:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:05.235 13:33:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:05.235 13:33:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:05.235 13:33:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.235 [2024-11-20 13:33:04.513576] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:13:05.235 [2024-11-20 13:33:04.513832] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61985 ] 00:13:05.235 [2024-11-20 13:33:04.694649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.494 [2024-11-20 13:33:04.809544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.753 [2024-11-20 13:33:05.018648] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:05.753 [2024-11-20 13:33:05.018714] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:06.012 13:33:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:06.012 13:33:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:06.012 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:06.012 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:06.013 13:33:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.013 malloc1 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.013 [2024-11-20 13:33:05.409863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:06.013 [2024-11-20 13:33:05.410072] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.013 [2024-11-20 13:33:05.410134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:06.013 [2024-11-20 13:33:05.410228] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.013 [2024-11-20 13:33:05.412596] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.013 [2024-11-20 13:33:05.412736] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:06.013 pt1 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:06.013 13:33:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.013 malloc2 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.013 [2024-11-20 13:33:05.461500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:06.013 [2024-11-20 13:33:05.461661] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.013 [2024-11-20 13:33:05.461723] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:06.013 
[2024-11-20 13:33:05.461796] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.013 [2024-11-20 13:33:05.464168] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.013 [2024-11-20 13:33:05.464205] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:06.013 pt2 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.013 [2024-11-20 13:33:05.473578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:06.013 [2024-11-20 13:33:05.475662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:06.013 [2024-11-20 13:33:05.475965] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:06.013 [2024-11-20 13:33:05.475984] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:06.013 [2024-11-20 13:33:05.476288] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:06.013 [2024-11-20 13:33:05.476446] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:06.013 [2024-11-20 13:33:05.476460] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:06.013 [2024-11-20 13:33:05.476629] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.013 13:33:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.273 13:33:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.273 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.273 "name": "raid_bdev1", 00:13:06.273 "uuid": 
"098746b7-d166-4b8f-bc82-808e6e450aa2", 00:13:06.273 "strip_size_kb": 64, 00:13:06.273 "state": "online", 00:13:06.273 "raid_level": "concat", 00:13:06.273 "superblock": true, 00:13:06.273 "num_base_bdevs": 2, 00:13:06.273 "num_base_bdevs_discovered": 2, 00:13:06.273 "num_base_bdevs_operational": 2, 00:13:06.273 "base_bdevs_list": [ 00:13:06.273 { 00:13:06.273 "name": "pt1", 00:13:06.273 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:06.273 "is_configured": true, 00:13:06.273 "data_offset": 2048, 00:13:06.273 "data_size": 63488 00:13:06.273 }, 00:13:06.273 { 00:13:06.273 "name": "pt2", 00:13:06.273 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:06.273 "is_configured": true, 00:13:06.273 "data_offset": 2048, 00:13:06.273 "data_size": 63488 00:13:06.273 } 00:13:06.273 ] 00:13:06.273 }' 00:13:06.273 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.273 13:33:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.535 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:06.535 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:06.535 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:06.535 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:06.535 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:06.535 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:06.535 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:06.535 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:06.535 13:33:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.535 
13:33:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.535 [2024-11-20 13:33:05.905470] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:06.535 13:33:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.535 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:06.535 "name": "raid_bdev1", 00:13:06.535 "aliases": [ 00:13:06.535 "098746b7-d166-4b8f-bc82-808e6e450aa2" 00:13:06.535 ], 00:13:06.535 "product_name": "Raid Volume", 00:13:06.535 "block_size": 512, 00:13:06.535 "num_blocks": 126976, 00:13:06.535 "uuid": "098746b7-d166-4b8f-bc82-808e6e450aa2", 00:13:06.535 "assigned_rate_limits": { 00:13:06.535 "rw_ios_per_sec": 0, 00:13:06.535 "rw_mbytes_per_sec": 0, 00:13:06.535 "r_mbytes_per_sec": 0, 00:13:06.535 "w_mbytes_per_sec": 0 00:13:06.535 }, 00:13:06.535 "claimed": false, 00:13:06.535 "zoned": false, 00:13:06.535 "supported_io_types": { 00:13:06.535 "read": true, 00:13:06.535 "write": true, 00:13:06.535 "unmap": true, 00:13:06.535 "flush": true, 00:13:06.535 "reset": true, 00:13:06.535 "nvme_admin": false, 00:13:06.535 "nvme_io": false, 00:13:06.535 "nvme_io_md": false, 00:13:06.535 "write_zeroes": true, 00:13:06.535 "zcopy": false, 00:13:06.535 "get_zone_info": false, 00:13:06.535 "zone_management": false, 00:13:06.535 "zone_append": false, 00:13:06.535 "compare": false, 00:13:06.535 "compare_and_write": false, 00:13:06.535 "abort": false, 00:13:06.535 "seek_hole": false, 00:13:06.535 "seek_data": false, 00:13:06.535 "copy": false, 00:13:06.535 "nvme_iov_md": false 00:13:06.535 }, 00:13:06.535 "memory_domains": [ 00:13:06.535 { 00:13:06.535 "dma_device_id": "system", 00:13:06.535 "dma_device_type": 1 00:13:06.535 }, 00:13:06.535 { 00:13:06.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.535 "dma_device_type": 2 00:13:06.535 }, 00:13:06.535 { 00:13:06.535 "dma_device_id": "system", 00:13:06.535 
"dma_device_type": 1 00:13:06.535 }, 00:13:06.535 { 00:13:06.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.535 "dma_device_type": 2 00:13:06.535 } 00:13:06.535 ], 00:13:06.535 "driver_specific": { 00:13:06.535 "raid": { 00:13:06.535 "uuid": "098746b7-d166-4b8f-bc82-808e6e450aa2", 00:13:06.535 "strip_size_kb": 64, 00:13:06.535 "state": "online", 00:13:06.535 "raid_level": "concat", 00:13:06.535 "superblock": true, 00:13:06.535 "num_base_bdevs": 2, 00:13:06.535 "num_base_bdevs_discovered": 2, 00:13:06.535 "num_base_bdevs_operational": 2, 00:13:06.535 "base_bdevs_list": [ 00:13:06.535 { 00:13:06.535 "name": "pt1", 00:13:06.535 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:06.535 "is_configured": true, 00:13:06.535 "data_offset": 2048, 00:13:06.535 "data_size": 63488 00:13:06.535 }, 00:13:06.535 { 00:13:06.535 "name": "pt2", 00:13:06.535 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:06.535 "is_configured": true, 00:13:06.535 "data_offset": 2048, 00:13:06.535 "data_size": 63488 00:13:06.535 } 00:13:06.535 ] 00:13:06.535 } 00:13:06.535 } 00:13:06.535 }' 00:13:06.535 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:06.535 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:06.535 pt2' 00:13:06.535 13:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:06.840 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:06.840 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:06.840 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:06.840 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.840 13:33:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:06.840 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.840 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.840 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:06.840 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:06.840 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:06.840 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:06.840 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.840 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.840 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:06.840 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.840 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:06.840 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:06.840 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:06.840 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:06.840 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.840 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.840 [2024-11-20 13:33:06.141459] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:13:06.840 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.840 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=098746b7-d166-4b8f-bc82-808e6e450aa2 00:13:06.840 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 098746b7-d166-4b8f-bc82-808e6e450aa2 ']' 00:13:06.840 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:06.840 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.840 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.840 [2024-11-20 13:33:06.181180] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:06.841 [2024-11-20 13:33:06.181313] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:06.841 [2024-11-20 13:33:06.181426] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:06.841 [2024-11-20 13:33:06.181477] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:06.841 [2024-11-20 13:33:06.181492] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.841 [2024-11-20 13:33:06.309258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:06.841 [2024-11-20 13:33:06.311523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:06.841 [2024-11-20 13:33:06.311594] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:06.841 [2024-11-20 13:33:06.312194] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:06.841 [2024-11-20 13:33:06.312314] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:06.841 [2024-11-20 13:33:06.312330] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:13:06.841 request: 00:13:06.841 { 00:13:06.841 "name": "raid_bdev1", 00:13:06.841 "raid_level": "concat", 00:13:06.841 "base_bdevs": [ 00:13:06.841 "malloc1", 00:13:06.841 "malloc2" 00:13:06.841 ], 00:13:06.841 "strip_size_kb": 64, 00:13:06.841 "superblock": false, 00:13:06.841 "method": "bdev_raid_create", 00:13:06.841 "req_id": 1 00:13:06.841 } 00:13:06.841 Got JSON-RPC error response 00:13:06.841 response: 00:13:06.841 { 00:13:06.841 "code": -17, 00:13:06.841 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:06.841 } 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.841 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.100 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.100 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:07.100 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:07.100 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:13:07.100 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.100 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.100 [2024-11-20 13:33:06.353121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:07.100 [2024-11-20 13:33:06.353290] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:07.100 [2024-11-20 13:33:06.353379] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:07.100 [2024-11-20 13:33:06.353446] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:07.100 [2024-11-20 13:33:06.355891] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:07.100 [2024-11-20 13:33:06.356031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:07.100 [2024-11-20 13:33:06.356176] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:07.100 [2024-11-20 13:33:06.356243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:07.100 pt1 00:13:07.100 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.100 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:13:07.100 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:07.100 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:07.100 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:07.100 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:07.100 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:13:07.100 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.100 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.100 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.100 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.100 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.100 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.100 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.100 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.100 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.100 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.100 "name": "raid_bdev1", 00:13:07.100 "uuid": "098746b7-d166-4b8f-bc82-808e6e450aa2", 00:13:07.100 "strip_size_kb": 64, 00:13:07.100 "state": "configuring", 00:13:07.100 "raid_level": "concat", 00:13:07.100 "superblock": true, 00:13:07.100 "num_base_bdevs": 2, 00:13:07.100 "num_base_bdevs_discovered": 1, 00:13:07.100 "num_base_bdevs_operational": 2, 00:13:07.100 "base_bdevs_list": [ 00:13:07.100 { 00:13:07.100 "name": "pt1", 00:13:07.100 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:07.100 "is_configured": true, 00:13:07.101 "data_offset": 2048, 00:13:07.101 "data_size": 63488 00:13:07.101 }, 00:13:07.101 { 00:13:07.101 "name": null, 00:13:07.101 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:07.101 "is_configured": false, 00:13:07.101 "data_offset": 2048, 00:13:07.101 "data_size": 63488 00:13:07.101 } 00:13:07.101 ] 00:13:07.101 }' 00:13:07.101 13:33:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.101 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.360 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:13:07.361 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:07.361 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:07.361 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:07.361 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.361 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.361 [2024-11-20 13:33:06.752573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:07.361 [2024-11-20 13:33:06.752792] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:07.361 [2024-11-20 13:33:06.752863] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:07.361 [2024-11-20 13:33:06.752920] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:07.361 [2024-11-20 13:33:06.753422] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:07.361 [2024-11-20 13:33:06.753548] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:07.361 [2024-11-20 13:33:06.753697] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:07.361 [2024-11-20 13:33:06.753731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:07.361 [2024-11-20 13:33:06.753843] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:07.361 [2024-11-20 13:33:06.753856] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:07.361 [2024-11-20 13:33:06.754123] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:07.361 [2024-11-20 13:33:06.754253] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:07.361 [2024-11-20 13:33:06.754272] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:07.361 [2024-11-20 13:33:06.754432] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:07.361 pt2 00:13:07.361 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.361 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:07.361 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:07.361 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:13:07.361 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:07.361 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:07.361 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:07.361 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:07.361 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:07.361 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.361 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.361 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.361 13:33:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.361 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.361 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.361 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.361 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.361 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.361 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.361 "name": "raid_bdev1", 00:13:07.361 "uuid": "098746b7-d166-4b8f-bc82-808e6e450aa2", 00:13:07.361 "strip_size_kb": 64, 00:13:07.361 "state": "online", 00:13:07.361 "raid_level": "concat", 00:13:07.361 "superblock": true, 00:13:07.361 "num_base_bdevs": 2, 00:13:07.361 "num_base_bdevs_discovered": 2, 00:13:07.361 "num_base_bdevs_operational": 2, 00:13:07.361 "base_bdevs_list": [ 00:13:07.361 { 00:13:07.361 "name": "pt1", 00:13:07.361 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:07.361 "is_configured": true, 00:13:07.361 "data_offset": 2048, 00:13:07.361 "data_size": 63488 00:13:07.361 }, 00:13:07.361 { 00:13:07.361 "name": "pt2", 00:13:07.361 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:07.361 "is_configured": true, 00:13:07.361 "data_offset": 2048, 00:13:07.361 "data_size": 63488 00:13:07.361 } 00:13:07.361 ] 00:13:07.361 }' 00:13:07.361 13:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.361 13:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.930 13:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:07.930 13:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:07.930 
13:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:07.930 13:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:07.930 13:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:07.930 13:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:07.930 13:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:07.930 13:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:07.930 13:33:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.930 13:33:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.930 [2024-11-20 13:33:07.184194] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:07.930 13:33:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.930 13:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:07.930 "name": "raid_bdev1", 00:13:07.930 "aliases": [ 00:13:07.930 "098746b7-d166-4b8f-bc82-808e6e450aa2" 00:13:07.930 ], 00:13:07.930 "product_name": "Raid Volume", 00:13:07.930 "block_size": 512, 00:13:07.930 "num_blocks": 126976, 00:13:07.930 "uuid": "098746b7-d166-4b8f-bc82-808e6e450aa2", 00:13:07.930 "assigned_rate_limits": { 00:13:07.930 "rw_ios_per_sec": 0, 00:13:07.930 "rw_mbytes_per_sec": 0, 00:13:07.930 "r_mbytes_per_sec": 0, 00:13:07.930 "w_mbytes_per_sec": 0 00:13:07.930 }, 00:13:07.930 "claimed": false, 00:13:07.930 "zoned": false, 00:13:07.930 "supported_io_types": { 00:13:07.930 "read": true, 00:13:07.930 "write": true, 00:13:07.930 "unmap": true, 00:13:07.930 "flush": true, 00:13:07.930 "reset": true, 00:13:07.930 "nvme_admin": false, 00:13:07.930 "nvme_io": false, 00:13:07.930 "nvme_io_md": false, 00:13:07.930 
"write_zeroes": true, 00:13:07.930 "zcopy": false, 00:13:07.930 "get_zone_info": false, 00:13:07.930 "zone_management": false, 00:13:07.930 "zone_append": false, 00:13:07.930 "compare": false, 00:13:07.930 "compare_and_write": false, 00:13:07.930 "abort": false, 00:13:07.930 "seek_hole": false, 00:13:07.930 "seek_data": false, 00:13:07.930 "copy": false, 00:13:07.930 "nvme_iov_md": false 00:13:07.930 }, 00:13:07.930 "memory_domains": [ 00:13:07.930 { 00:13:07.930 "dma_device_id": "system", 00:13:07.930 "dma_device_type": 1 00:13:07.930 }, 00:13:07.930 { 00:13:07.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.930 "dma_device_type": 2 00:13:07.930 }, 00:13:07.930 { 00:13:07.930 "dma_device_id": "system", 00:13:07.930 "dma_device_type": 1 00:13:07.930 }, 00:13:07.930 { 00:13:07.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.930 "dma_device_type": 2 00:13:07.930 } 00:13:07.930 ], 00:13:07.930 "driver_specific": { 00:13:07.930 "raid": { 00:13:07.930 "uuid": "098746b7-d166-4b8f-bc82-808e6e450aa2", 00:13:07.930 "strip_size_kb": 64, 00:13:07.930 "state": "online", 00:13:07.930 "raid_level": "concat", 00:13:07.930 "superblock": true, 00:13:07.930 "num_base_bdevs": 2, 00:13:07.930 "num_base_bdevs_discovered": 2, 00:13:07.930 "num_base_bdevs_operational": 2, 00:13:07.930 "base_bdevs_list": [ 00:13:07.930 { 00:13:07.930 "name": "pt1", 00:13:07.930 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:07.930 "is_configured": true, 00:13:07.930 "data_offset": 2048, 00:13:07.930 "data_size": 63488 00:13:07.930 }, 00:13:07.930 { 00:13:07.930 "name": "pt2", 00:13:07.930 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:07.930 "is_configured": true, 00:13:07.930 "data_offset": 2048, 00:13:07.930 "data_size": 63488 00:13:07.930 } 00:13:07.930 ] 00:13:07.930 } 00:13:07.930 } 00:13:07.930 }' 00:13:07.930 13:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:13:07.930 13:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:07.930 pt2' 00:13:07.930 13:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.930 13:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:07.930 13:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:07.930 13:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:07.930 13:33:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.930 13:33:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.930 13:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.931 13:33:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.931 13:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:07.931 13:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:07.931 13:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:07.931 13:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.931 13:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:07.931 13:33:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.931 13:33:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.931 13:33:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.190 13:33:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:08.190 13:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:08.190 13:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:08.190 13:33:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.190 13:33:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.190 13:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:08.190 [2024-11-20 13:33:07.423827] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:08.190 13:33:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.190 13:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 098746b7-d166-4b8f-bc82-808e6e450aa2 '!=' 098746b7-d166-4b8f-bc82-808e6e450aa2 ']' 00:13:08.190 13:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:13:08.190 13:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:08.190 13:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:08.190 13:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61985 00:13:08.190 13:33:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61985 ']' 00:13:08.190 13:33:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61985 00:13:08.190 13:33:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:13:08.190 13:33:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:08.190 13:33:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61985 00:13:08.190 13:33:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:08.190 13:33:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:08.190 killing process with pid 61985 00:13:08.190 13:33:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61985' 00:13:08.190 13:33:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61985 00:13:08.190 [2024-11-20 13:33:07.502107] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:08.190 [2024-11-20 13:33:07.502195] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:08.190 [2024-11-20 13:33:07.502243] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:08.190 [2024-11-20 13:33:07.502260] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:08.190 13:33:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61985 00:13:08.449 [2024-11-20 13:33:07.714630] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:09.392 13:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:09.392 00:13:09.392 real 0m4.451s 00:13:09.392 user 0m6.202s 00:13:09.392 sys 0m0.840s 00:13:09.392 13:33:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.392 13:33:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.392 ************************************ 00:13:09.392 END TEST raid_superblock_test 00:13:09.392 ************************************ 00:13:09.650 13:33:08 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:13:09.650 13:33:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:09.650 13:33:08 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:13:09.650 13:33:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:09.650 ************************************ 00:13:09.650 START TEST raid_read_error_test 00:13:09.650 ************************************ 00:13:09.650 13:33:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:13:09.650 13:33:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:09.650 13:33:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:13:09.650 13:33:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:09.650 13:33:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:09.650 13:33:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:09.650 13:33:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:09.650 13:33:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:09.651 13:33:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:09.651 13:33:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:09.651 13:33:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:09.651 13:33:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:09.651 13:33:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:09.651 13:33:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:09.651 13:33:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:09.651 13:33:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:09.651 13:33:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:09.651 13:33:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:09.651 13:33:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:09.651 13:33:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:09.651 13:33:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:09.651 13:33:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:09.651 13:33:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:09.651 13:33:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.nrwZhB26Q6 00:13:09.651 13:33:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62202 00:13:09.651 13:33:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62202 00:13:09.651 13:33:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:09.651 13:33:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62202 ']' 00:13:09.651 13:33:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.651 13:33:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:09.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.651 13:33:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:09.651 13:33:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:09.651 13:33:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.651 [2024-11-20 13:33:09.062353] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:13:09.651 [2024-11-20 13:33:09.062477] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62202 ] 00:13:09.910 [2024-11-20 13:33:09.242931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.910 [2024-11-20 13:33:09.364747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.168 [2024-11-20 13:33:09.556151] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:10.168 [2024-11-20 13:33:09.556214] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:10.736 13:33:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:10.736 13:33:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:10.736 13:33:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:10.736 13:33:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:10.736 13:33:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.736 13:33:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.736 BaseBdev1_malloc 00:13:10.736 13:33:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.736 13:33:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:13:10.736 13:33:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.736 13:33:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.736 true 00:13:10.736 13:33:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.736 13:33:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:10.736 13:33:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.736 13:33:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.736 [2024-11-20 13:33:10.093222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:10.736 [2024-11-20 13:33:10.093277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.736 [2024-11-20 13:33:10.093303] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:10.736 [2024-11-20 13:33:10.093318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.736 [2024-11-20 13:33:10.095632] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.737 [2024-11-20 13:33:10.095676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:10.737 BaseBdev1 00:13:10.737 13:33:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.737 13:33:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:10.737 13:33:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:10.737 13:33:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.737 13:33:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:10.737 BaseBdev2_malloc 00:13:10.737 13:33:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.737 13:33:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:10.737 13:33:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.737 13:33:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.737 true 00:13:10.737 13:33:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.737 13:33:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:10.737 13:33:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.737 13:33:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.737 [2024-11-20 13:33:10.159735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:10.737 [2024-11-20 13:33:10.159793] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.737 [2024-11-20 13:33:10.159811] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:10.737 [2024-11-20 13:33:10.159825] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.737 [2024-11-20 13:33:10.162153] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.737 [2024-11-20 13:33:10.162193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:10.737 BaseBdev2 00:13:10.737 13:33:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.737 13:33:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:13:10.737 
13:33:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.737 13:33:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.737 [2024-11-20 13:33:10.171780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:10.737 [2024-11-20 13:33:10.173830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:10.737 [2024-11-20 13:33:10.174023] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:10.737 [2024-11-20 13:33:10.174040] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:10.737 [2024-11-20 13:33:10.174300] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:10.737 [2024-11-20 13:33:10.174466] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:10.737 [2024-11-20 13:33:10.174489] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:10.737 [2024-11-20 13:33:10.174627] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.737 13:33:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.737 13:33:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:13:10.737 13:33:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:10.737 13:33:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:10.737 13:33:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:10.737 13:33:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:10.737 13:33:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:13:10.737 13:33:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.737 13:33:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.737 13:33:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.737 13:33:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.737 13:33:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.737 13:33:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.737 13:33:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.737 13:33:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.737 13:33:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.996 13:33:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.996 "name": "raid_bdev1", 00:13:10.996 "uuid": "85c44c8f-21d3-474e-84f5-25492f8578b0", 00:13:10.996 "strip_size_kb": 64, 00:13:10.996 "state": "online", 00:13:10.996 "raid_level": "concat", 00:13:10.996 "superblock": true, 00:13:10.996 "num_base_bdevs": 2, 00:13:10.996 "num_base_bdevs_discovered": 2, 00:13:10.996 "num_base_bdevs_operational": 2, 00:13:10.996 "base_bdevs_list": [ 00:13:10.996 { 00:13:10.996 "name": "BaseBdev1", 00:13:10.996 "uuid": "19effa65-4f9b-5fb4-b5f6-16b2f4cd1fbe", 00:13:10.996 "is_configured": true, 00:13:10.996 "data_offset": 2048, 00:13:10.996 "data_size": 63488 00:13:10.996 }, 00:13:10.996 { 00:13:10.996 "name": "BaseBdev2", 00:13:10.996 "uuid": "7a869486-d70c-5b29-8214-ce7c8ca40885", 00:13:10.996 "is_configured": true, 00:13:10.996 "data_offset": 2048, 00:13:10.996 "data_size": 63488 00:13:10.996 } 00:13:10.996 ] 00:13:10.996 }' 00:13:10.996 13:33:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.996 13:33:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.255 13:33:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:11.255 13:33:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:11.255 [2024-11-20 13:33:10.692589] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:12.217 13:33:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:12.217 13:33:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.217 13:33:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.217 13:33:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.217 13:33:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:12.218 13:33:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:12.218 13:33:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:13:12.218 13:33:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:13:12.218 13:33:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.218 13:33:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:12.218 13:33:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:12.218 13:33:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:12.218 13:33:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:13:12.218 13:33:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.218 13:33:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.218 13:33:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.218 13:33:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.218 13:33:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.218 13:33:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.218 13:33:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.218 13:33:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.218 13:33:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.218 13:33:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.218 "name": "raid_bdev1", 00:13:12.218 "uuid": "85c44c8f-21d3-474e-84f5-25492f8578b0", 00:13:12.218 "strip_size_kb": 64, 00:13:12.218 "state": "online", 00:13:12.218 "raid_level": "concat", 00:13:12.218 "superblock": true, 00:13:12.218 "num_base_bdevs": 2, 00:13:12.218 "num_base_bdevs_discovered": 2, 00:13:12.218 "num_base_bdevs_operational": 2, 00:13:12.218 "base_bdevs_list": [ 00:13:12.218 { 00:13:12.218 "name": "BaseBdev1", 00:13:12.218 "uuid": "19effa65-4f9b-5fb4-b5f6-16b2f4cd1fbe", 00:13:12.218 "is_configured": true, 00:13:12.218 "data_offset": 2048, 00:13:12.218 "data_size": 63488 00:13:12.218 }, 00:13:12.218 { 00:13:12.218 "name": "BaseBdev2", 00:13:12.218 "uuid": "7a869486-d70c-5b29-8214-ce7c8ca40885", 00:13:12.218 "is_configured": true, 00:13:12.218 "data_offset": 2048, 00:13:12.218 "data_size": 63488 00:13:12.218 } 00:13:12.218 ] 00:13:12.218 }' 00:13:12.218 13:33:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.218 13:33:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.786 13:33:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:12.786 13:33:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.786 13:33:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.786 [2024-11-20 13:33:12.026881] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:12.786 [2024-11-20 13:33:12.026924] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:12.786 [2024-11-20 13:33:12.029551] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:12.786 [2024-11-20 13:33:12.029600] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:12.786 [2024-11-20 13:33:12.029631] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:12.786 [2024-11-20 13:33:12.029648] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:12.786 { 00:13:12.786 "results": [ 00:13:12.786 { 00:13:12.786 "job": "raid_bdev1", 00:13:12.786 "core_mask": "0x1", 00:13:12.786 "workload": "randrw", 00:13:12.786 "percentage": 50, 00:13:12.786 "status": "finished", 00:13:12.786 "queue_depth": 1, 00:13:12.786 "io_size": 131072, 00:13:12.786 "runtime": 1.334528, 00:13:12.786 "iops": 16879.376079033187, 00:13:12.786 "mibps": 2109.9220098791484, 00:13:12.786 "io_failed": 1, 00:13:12.786 "io_timeout": 0, 00:13:12.786 "avg_latency_us": 81.36115080466583, 00:13:12.786 "min_latency_us": 26.936546184738955, 00:13:12.786 "max_latency_us": 1414.6827309236949 00:13:12.786 } 00:13:12.786 ], 00:13:12.786 "core_count": 1 00:13:12.786 } 00:13:12.786 13:33:12 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.786 13:33:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62202 00:13:12.786 13:33:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62202 ']' 00:13:12.786 13:33:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62202 00:13:12.786 13:33:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:13:12.786 13:33:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:12.786 13:33:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62202 00:13:12.786 13:33:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:12.786 13:33:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:12.786 killing process with pid 62202 00:13:12.786 13:33:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62202' 00:13:12.786 13:33:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62202 00:13:12.786 [2024-11-20 13:33:12.076991] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:12.786 13:33:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62202 00:13:12.786 [2024-11-20 13:33:12.212612] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:14.163 13:33:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:14.163 13:33:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.nrwZhB26Q6 00:13:14.163 13:33:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:14.163 13:33:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:13:14.163 13:33:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:14.163 13:33:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:14.163 13:33:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:14.163 13:33:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:13:14.163 00:13:14.163 real 0m4.487s 00:13:14.163 user 0m5.401s 00:13:14.163 sys 0m0.594s 00:13:14.163 13:33:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:14.163 13:33:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.163 ************************************ 00:13:14.163 END TEST raid_read_error_test 00:13:14.163 ************************************ 00:13:14.163 13:33:13 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:13:14.163 13:33:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:14.163 13:33:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:14.163 13:33:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:14.163 ************************************ 00:13:14.163 START TEST raid_write_error_test 00:13:14.163 ************************************ 00:13:14.163 13:33:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:13:14.163 13:33:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:14.163 13:33:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:13:14.163 13:33:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:14.163 13:33:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:14.163 13:33:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 
00:13:14.163 13:33:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:14.163 13:33:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:14.163 13:33:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:14.163 13:33:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:14.163 13:33:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:14.163 13:33:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:14.163 13:33:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:14.163 13:33:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:14.163 13:33:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:14.163 13:33:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:14.163 13:33:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:14.163 13:33:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:14.163 13:33:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:14.163 13:33:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:14.163 13:33:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:14.163 13:33:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:14.163 13:33:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:14.163 13:33:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6dzxpdRgoD 00:13:14.163 13:33:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62342 
00:13:14.163 13:33:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62342 00:13:14.163 13:33:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:14.163 13:33:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62342 ']' 00:13:14.163 13:33:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.163 13:33:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:14.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.163 13:33:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.163 13:33:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:14.163 13:33:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.163 [2024-11-20 13:33:13.613928] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:13:14.163 [2024-11-20 13:33:13.614052] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62342 ] 00:13:14.422 [2024-11-20 13:33:13.777028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.422 [2024-11-20 13:33:13.894409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.682 [2024-11-20 13:33:14.108920] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:14.683 [2024-11-20 13:33:14.108977] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.252 BaseBdev1_malloc 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.252 true 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.252 [2024-11-20 13:33:14.543885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:15.252 [2024-11-20 13:33:14.544095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.252 [2024-11-20 13:33:14.544130] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:15.252 [2024-11-20 13:33:14.544145] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.252 [2024-11-20 13:33:14.546607] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.252 [2024-11-20 13:33:14.546653] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:15.252 BaseBdev1 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.252 BaseBdev2_malloc 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:15.252 13:33:14 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.252 true 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.252 [2024-11-20 13:33:14.611779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:15.252 [2024-11-20 13:33:14.611961] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.252 [2024-11-20 13:33:14.612015] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:15.252 [2024-11-20 13:33:14.612032] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.252 [2024-11-20 13:33:14.614394] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.252 [2024-11-20 13:33:14.614537] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:15.252 BaseBdev2 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.252 [2024-11-20 13:33:14.623827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:13:15.252 [2024-11-20 13:33:14.625955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:15.252 [2024-11-20 13:33:14.626260] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:15.252 [2024-11-20 13:33:14.626321] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:15.252 [2024-11-20 13:33:14.626640] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:15.252 [2024-11-20 13:33:14.626905] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:15.252 [2024-11-20 13:33:14.627032] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:15.252 [2024-11-20 13:33:14.627322] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.252 13:33:14 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.252 "name": "raid_bdev1", 00:13:15.252 "uuid": "fd3f2e4e-30e6-40e4-a57a-0cc5e028517b", 00:13:15.252 "strip_size_kb": 64, 00:13:15.252 "state": "online", 00:13:15.252 "raid_level": "concat", 00:13:15.252 "superblock": true, 00:13:15.252 "num_base_bdevs": 2, 00:13:15.252 "num_base_bdevs_discovered": 2, 00:13:15.252 "num_base_bdevs_operational": 2, 00:13:15.252 "base_bdevs_list": [ 00:13:15.252 { 00:13:15.252 "name": "BaseBdev1", 00:13:15.252 "uuid": "e3a0b43e-9039-5716-bffc-2dcb0ca51bdc", 00:13:15.252 "is_configured": true, 00:13:15.252 "data_offset": 2048, 00:13:15.252 "data_size": 63488 00:13:15.252 }, 00:13:15.252 { 00:13:15.252 "name": "BaseBdev2", 00:13:15.252 "uuid": "b50ee439-a7ba-5505-91c0-4d068dc0a681", 00:13:15.252 "is_configured": true, 00:13:15.252 "data_offset": 2048, 00:13:15.252 "data_size": 63488 00:13:15.252 } 00:13:15.252 ] 00:13:15.252 }' 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.252 13:33:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.821 13:33:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:13:15.821 13:33:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:15.821 [2024-11-20 13:33:15.136544] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:16.757 13:33:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:16.757 13:33:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.757 13:33:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.757 13:33:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.757 13:33:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:16.757 13:33:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:16.757 13:33:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:13:16.757 13:33:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:13:16.757 13:33:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.757 13:33:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.757 13:33:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:16.757 13:33:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:16.757 13:33:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:16.757 13:33:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.757 13:33:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:13:16.757 13:33:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.757 13:33:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.757 13:33:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.757 13:33:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.757 13:33:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.757 13:33:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.757 13:33:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.757 13:33:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.757 "name": "raid_bdev1", 00:13:16.757 "uuid": "fd3f2e4e-30e6-40e4-a57a-0cc5e028517b", 00:13:16.757 "strip_size_kb": 64, 00:13:16.757 "state": "online", 00:13:16.757 "raid_level": "concat", 00:13:16.757 "superblock": true, 00:13:16.757 "num_base_bdevs": 2, 00:13:16.757 "num_base_bdevs_discovered": 2, 00:13:16.757 "num_base_bdevs_operational": 2, 00:13:16.757 "base_bdevs_list": [ 00:13:16.757 { 00:13:16.757 "name": "BaseBdev1", 00:13:16.757 "uuid": "e3a0b43e-9039-5716-bffc-2dcb0ca51bdc", 00:13:16.757 "is_configured": true, 00:13:16.757 "data_offset": 2048, 00:13:16.757 "data_size": 63488 00:13:16.757 }, 00:13:16.757 { 00:13:16.757 "name": "BaseBdev2", 00:13:16.757 "uuid": "b50ee439-a7ba-5505-91c0-4d068dc0a681", 00:13:16.757 "is_configured": true, 00:13:16.757 "data_offset": 2048, 00:13:16.757 "data_size": 63488 00:13:16.757 } 00:13:16.757 ] 00:13:16.757 }' 00:13:16.757 13:33:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.757 13:33:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.015 13:33:16 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:17.015 13:33:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.015 13:33:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.015 [2024-11-20 13:33:16.443507] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:17.015 [2024-11-20 13:33:16.443684] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:17.015 [2024-11-20 13:33:16.446653] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:17.015 [2024-11-20 13:33:16.446809] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.015 [2024-11-20 13:33:16.446881] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:17.015 [2024-11-20 13:33:16.447008] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:17.015 { 00:13:17.015 "results": [ 00:13:17.015 { 00:13:17.015 "job": "raid_bdev1", 00:13:17.015 "core_mask": "0x1", 00:13:17.015 "workload": "randrw", 00:13:17.015 "percentage": 50, 00:13:17.015 "status": "finished", 00:13:17.015 "queue_depth": 1, 00:13:17.015 "io_size": 131072, 00:13:17.015 "runtime": 1.307141, 00:13:17.015 "iops": 15948.547249302103, 00:13:17.015 "mibps": 1993.5684061627628, 00:13:17.015 "io_failed": 1, 00:13:17.015 "io_timeout": 0, 00:13:17.015 "avg_latency_us": 86.31625874179758, 00:13:17.016 "min_latency_us": 26.936546184738955, 00:13:17.016 "max_latency_us": 1533.1212851405623 00:13:17.016 } 00:13:17.016 ], 00:13:17.016 "core_count": 1 00:13:17.016 } 00:13:17.016 13:33:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.016 13:33:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62342 00:13:17.016 13:33:16 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62342 ']' 00:13:17.016 13:33:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62342 00:13:17.016 13:33:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:13:17.016 13:33:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:17.016 13:33:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62342 00:13:17.016 killing process with pid 62342 00:13:17.016 13:33:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:17.016 13:33:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:17.016 13:33:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62342' 00:13:17.016 13:33:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62342 00:13:17.016 13:33:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62342 00:13:17.016 [2024-11-20 13:33:16.483274] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:17.273 [2024-11-20 13:33:16.631027] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:18.650 13:33:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6dzxpdRgoD 00:13:18.650 13:33:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:18.650 13:33:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:18.650 13:33:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.77 00:13:18.650 13:33:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:18.650 ************************************ 00:13:18.651 END TEST raid_write_error_test 00:13:18.651 
************************************ 00:13:18.651 13:33:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:18.651 13:33:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:18.651 13:33:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.77 != \0\.\0\0 ]] 00:13:18.651 00:13:18.651 real 0m4.341s 00:13:18.651 user 0m5.143s 00:13:18.651 sys 0m0.566s 00:13:18.651 13:33:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:18.651 13:33:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.651 13:33:17 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:18.651 13:33:17 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:13:18.651 13:33:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:18.651 13:33:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:18.651 13:33:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:18.651 ************************************ 00:13:18.651 START TEST raid_state_function_test 00:13:18.651 ************************************ 00:13:18.651 13:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:13:18.651 13:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:18.651 13:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:13:18.651 13:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:18.651 13:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:18.651 13:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:18.651 13:33:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:18.651 13:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:18.651 13:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:18.651 13:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:18.651 13:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:18.651 13:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:18.651 13:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:18.651 13:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:18.651 13:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:18.651 13:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:18.651 13:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:18.651 13:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:18.651 13:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:18.651 13:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:18.651 13:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:18.651 13:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:18.651 13:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:18.651 Process raid pid: 62486 00:13:18.651 13:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62486 00:13:18.651 13:33:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:18.651 13:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62486' 00:13:18.651 13:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62486 00:13:18.651 13:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62486 ']' 00:13:18.651 13:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.651 13:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:18.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.651 13:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.651 13:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:18.651 13:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.651 [2024-11-20 13:33:18.022454] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:13:18.651 [2024-11-20 13:33:18.022780] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:18.910 [2024-11-20 13:33:18.201995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.910 [2024-11-20 13:33:18.321902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.169 [2024-11-20 13:33:18.521282] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:19.169 [2024-11-20 13:33:18.521324] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:19.429 13:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:19.429 13:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:19.429 13:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:19.429 13:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.429 13:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.429 [2024-11-20 13:33:18.889493] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:19.429 [2024-11-20 13:33:18.889549] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:19.429 [2024-11-20 13:33:18.889562] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:19.429 [2024-11-20 13:33:18.889574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:19.429 13:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.429 13:33:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:19.429 13:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:19.429 13:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:19.429 13:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.429 13:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:19.429 13:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:19.429 13:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.429 13:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.429 13:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.429 13:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.429 13:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.429 13:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.429 13:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.429 13:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.688 13:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.688 13:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.688 "name": "Existed_Raid", 00:13:19.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.689 "strip_size_kb": 0, 00:13:19.689 "state": "configuring", 00:13:19.689 
"raid_level": "raid1", 00:13:19.689 "superblock": false, 00:13:19.689 "num_base_bdevs": 2, 00:13:19.689 "num_base_bdevs_discovered": 0, 00:13:19.689 "num_base_bdevs_operational": 2, 00:13:19.689 "base_bdevs_list": [ 00:13:19.689 { 00:13:19.689 "name": "BaseBdev1", 00:13:19.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.689 "is_configured": false, 00:13:19.689 "data_offset": 0, 00:13:19.689 "data_size": 0 00:13:19.689 }, 00:13:19.689 { 00:13:19.689 "name": "BaseBdev2", 00:13:19.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.689 "is_configured": false, 00:13:19.689 "data_offset": 0, 00:13:19.689 "data_size": 0 00:13:19.689 } 00:13:19.689 ] 00:13:19.689 }' 00:13:19.689 13:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.689 13:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.947 [2024-11-20 13:33:19.292963] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:19.947 [2024-11-20 13:33:19.292999] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:19.947 [2024-11-20 13:33:19.300934] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:19.947 [2024-11-20 13:33:19.300983] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:19.947 [2024-11-20 13:33:19.300995] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:19.947 [2024-11-20 13:33:19.301010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.947 [2024-11-20 13:33:19.346636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:19.947 BaseBdev1 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.947 [ 00:13:19.947 { 00:13:19.947 "name": "BaseBdev1", 00:13:19.947 "aliases": [ 00:13:19.947 "ae52346c-beb0-4006-afdf-15297f7b60ce" 00:13:19.947 ], 00:13:19.947 "product_name": "Malloc disk", 00:13:19.947 "block_size": 512, 00:13:19.947 "num_blocks": 65536, 00:13:19.947 "uuid": "ae52346c-beb0-4006-afdf-15297f7b60ce", 00:13:19.947 "assigned_rate_limits": { 00:13:19.947 "rw_ios_per_sec": 0, 00:13:19.947 "rw_mbytes_per_sec": 0, 00:13:19.947 "r_mbytes_per_sec": 0, 00:13:19.947 "w_mbytes_per_sec": 0 00:13:19.947 }, 00:13:19.947 "claimed": true, 00:13:19.947 "claim_type": "exclusive_write", 00:13:19.947 "zoned": false, 00:13:19.947 "supported_io_types": { 00:13:19.947 "read": true, 00:13:19.947 "write": true, 00:13:19.947 "unmap": true, 00:13:19.947 "flush": true, 00:13:19.947 "reset": true, 00:13:19.947 "nvme_admin": false, 00:13:19.947 "nvme_io": false, 00:13:19.947 "nvme_io_md": false, 00:13:19.947 "write_zeroes": true, 00:13:19.947 "zcopy": true, 00:13:19.947 "get_zone_info": false, 00:13:19.947 "zone_management": false, 00:13:19.947 "zone_append": false, 00:13:19.947 "compare": false, 00:13:19.947 "compare_and_write": false, 00:13:19.947 "abort": true, 00:13:19.947 "seek_hole": false, 00:13:19.947 "seek_data": false, 00:13:19.947 "copy": true, 00:13:19.947 "nvme_iov_md": 
false 00:13:19.947 }, 00:13:19.947 "memory_domains": [ 00:13:19.947 { 00:13:19.947 "dma_device_id": "system", 00:13:19.947 "dma_device_type": 1 00:13:19.947 }, 00:13:19.947 { 00:13:19.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.947 "dma_device_type": 2 00:13:19.947 } 00:13:19.947 ], 00:13:19.947 "driver_specific": {} 00:13:19.947 } 00:13:19.947 ] 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.947 
13:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.947 "name": "Existed_Raid", 00:13:19.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.947 "strip_size_kb": 0, 00:13:19.947 "state": "configuring", 00:13:19.947 "raid_level": "raid1", 00:13:19.947 "superblock": false, 00:13:19.947 "num_base_bdevs": 2, 00:13:19.947 "num_base_bdevs_discovered": 1, 00:13:19.947 "num_base_bdevs_operational": 2, 00:13:19.947 "base_bdevs_list": [ 00:13:19.947 { 00:13:19.947 "name": "BaseBdev1", 00:13:19.947 "uuid": "ae52346c-beb0-4006-afdf-15297f7b60ce", 00:13:19.947 "is_configured": true, 00:13:19.947 "data_offset": 0, 00:13:19.947 "data_size": 65536 00:13:19.947 }, 00:13:19.947 { 00:13:19.947 "name": "BaseBdev2", 00:13:19.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.947 "is_configured": false, 00:13:19.947 "data_offset": 0, 00:13:19.947 "data_size": 0 00:13:19.947 } 00:13:19.947 ] 00:13:19.947 }' 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.947 13:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.516 13:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:20.516 13:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.516 13:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.516 [2024-11-20 13:33:19.778205] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:20.516 [2024-11-20 13:33:19.778389] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:20.516 13:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.516 13:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:20.516 13:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.516 13:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.516 [2024-11-20 13:33:19.786251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:20.516 [2024-11-20 13:33:19.788336] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:20.516 [2024-11-20 13:33:19.788385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:20.516 13:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.516 13:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:20.516 13:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:20.516 13:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:20.516 13:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:20.516 13:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:20.516 13:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:20.516 13:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:20.516 13:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:13:20.516 13:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.516 13:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.516 13:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.516 13:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.516 13:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.516 13:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.516 13:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.516 13:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.516 13:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.516 13:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.516 "name": "Existed_Raid", 00:13:20.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.516 "strip_size_kb": 0, 00:13:20.516 "state": "configuring", 00:13:20.516 "raid_level": "raid1", 00:13:20.516 "superblock": false, 00:13:20.516 "num_base_bdevs": 2, 00:13:20.516 "num_base_bdevs_discovered": 1, 00:13:20.516 "num_base_bdevs_operational": 2, 00:13:20.516 "base_bdevs_list": [ 00:13:20.516 { 00:13:20.516 "name": "BaseBdev1", 00:13:20.516 "uuid": "ae52346c-beb0-4006-afdf-15297f7b60ce", 00:13:20.516 "is_configured": true, 00:13:20.516 "data_offset": 0, 00:13:20.516 "data_size": 65536 00:13:20.516 }, 00:13:20.516 { 00:13:20.516 "name": "BaseBdev2", 00:13:20.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.516 "is_configured": false, 00:13:20.516 "data_offset": 0, 00:13:20.516 "data_size": 0 00:13:20.516 } 00:13:20.516 ] 
00:13:20.516 }' 00:13:20.516 13:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.516 13:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.775 13:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:20.775 13:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.775 13:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.775 [2024-11-20 13:33:20.259036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:20.775 [2024-11-20 13:33:20.259137] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:20.775 [2024-11-20 13:33:20.259149] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:20.775 [2024-11-20 13:33:20.259504] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:21.034 [2024-11-20 13:33:20.259704] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:21.034 [2024-11-20 13:33:20.259726] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:21.034 [2024-11-20 13:33:20.260012] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:21.034 BaseBdev2 00:13:21.034 13:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.034 13:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:21.034 13:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:21.034 13:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:21.034 13:33:20 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@905 -- # local i 00:13:21.034 13:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:21.034 13:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:21.034 13:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:21.034 13:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.034 13:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.034 13:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.034 13:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:21.034 13:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.034 13:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.034 [ 00:13:21.034 { 00:13:21.034 "name": "BaseBdev2", 00:13:21.034 "aliases": [ 00:13:21.034 "839724da-a2a2-4bee-94c8-9e28ad9e3db8" 00:13:21.034 ], 00:13:21.034 "product_name": "Malloc disk", 00:13:21.034 "block_size": 512, 00:13:21.034 "num_blocks": 65536, 00:13:21.034 "uuid": "839724da-a2a2-4bee-94c8-9e28ad9e3db8", 00:13:21.034 "assigned_rate_limits": { 00:13:21.034 "rw_ios_per_sec": 0, 00:13:21.034 "rw_mbytes_per_sec": 0, 00:13:21.034 "r_mbytes_per_sec": 0, 00:13:21.034 "w_mbytes_per_sec": 0 00:13:21.034 }, 00:13:21.034 "claimed": true, 00:13:21.034 "claim_type": "exclusive_write", 00:13:21.034 "zoned": false, 00:13:21.034 "supported_io_types": { 00:13:21.034 "read": true, 00:13:21.034 "write": true, 00:13:21.034 "unmap": true, 00:13:21.034 "flush": true, 00:13:21.034 "reset": true, 00:13:21.034 "nvme_admin": false, 00:13:21.034 "nvme_io": false, 00:13:21.034 "nvme_io_md": false, 00:13:21.035 "write_zeroes": 
true, 00:13:21.035 "zcopy": true, 00:13:21.035 "get_zone_info": false, 00:13:21.035 "zone_management": false, 00:13:21.035 "zone_append": false, 00:13:21.035 "compare": false, 00:13:21.035 "compare_and_write": false, 00:13:21.035 "abort": true, 00:13:21.035 "seek_hole": false, 00:13:21.035 "seek_data": false, 00:13:21.035 "copy": true, 00:13:21.035 "nvme_iov_md": false 00:13:21.035 }, 00:13:21.035 "memory_domains": [ 00:13:21.035 { 00:13:21.035 "dma_device_id": "system", 00:13:21.035 "dma_device_type": 1 00:13:21.035 }, 00:13:21.035 { 00:13:21.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.035 "dma_device_type": 2 00:13:21.035 } 00:13:21.035 ], 00:13:21.035 "driver_specific": {} 00:13:21.035 } 00:13:21.035 ] 00:13:21.035 13:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.035 13:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:21.035 13:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:21.035 13:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:21.035 13:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:13:21.035 13:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:21.035 13:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:21.035 13:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:21.035 13:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:21.035 13:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:21.035 13:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.035 13:33:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.035 13:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.035 13:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.035 13:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.035 13:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.035 13:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.035 13:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.035 13:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.035 13:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.035 "name": "Existed_Raid", 00:13:21.035 "uuid": "dc30dcc6-3c08-4a87-a884-c7de3f14cfa7", 00:13:21.035 "strip_size_kb": 0, 00:13:21.035 "state": "online", 00:13:21.035 "raid_level": "raid1", 00:13:21.035 "superblock": false, 00:13:21.035 "num_base_bdevs": 2, 00:13:21.035 "num_base_bdevs_discovered": 2, 00:13:21.035 "num_base_bdevs_operational": 2, 00:13:21.035 "base_bdevs_list": [ 00:13:21.035 { 00:13:21.035 "name": "BaseBdev1", 00:13:21.035 "uuid": "ae52346c-beb0-4006-afdf-15297f7b60ce", 00:13:21.035 "is_configured": true, 00:13:21.035 "data_offset": 0, 00:13:21.035 "data_size": 65536 00:13:21.035 }, 00:13:21.035 { 00:13:21.035 "name": "BaseBdev2", 00:13:21.035 "uuid": "839724da-a2a2-4bee-94c8-9e28ad9e3db8", 00:13:21.035 "is_configured": true, 00:13:21.035 "data_offset": 0, 00:13:21.035 "data_size": 65536 00:13:21.035 } 00:13:21.035 ] 00:13:21.035 }' 00:13:21.035 13:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.035 13:33:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.294 13:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:21.294 13:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:21.294 13:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:21.294 13:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:21.294 13:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:21.294 13:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:21.294 13:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:21.294 13:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.294 13:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:21.294 13:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.294 [2024-11-20 13:33:20.774693] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:21.553 13:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.553 13:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:21.553 "name": "Existed_Raid", 00:13:21.553 "aliases": [ 00:13:21.553 "dc30dcc6-3c08-4a87-a884-c7de3f14cfa7" 00:13:21.553 ], 00:13:21.553 "product_name": "Raid Volume", 00:13:21.553 "block_size": 512, 00:13:21.553 "num_blocks": 65536, 00:13:21.553 "uuid": "dc30dcc6-3c08-4a87-a884-c7de3f14cfa7", 00:13:21.553 "assigned_rate_limits": { 00:13:21.553 "rw_ios_per_sec": 0, 00:13:21.553 "rw_mbytes_per_sec": 0, 00:13:21.553 "r_mbytes_per_sec": 0, 00:13:21.553 
"w_mbytes_per_sec": 0 00:13:21.553 }, 00:13:21.553 "claimed": false, 00:13:21.553 "zoned": false, 00:13:21.553 "supported_io_types": { 00:13:21.553 "read": true, 00:13:21.553 "write": true, 00:13:21.553 "unmap": false, 00:13:21.553 "flush": false, 00:13:21.553 "reset": true, 00:13:21.553 "nvme_admin": false, 00:13:21.553 "nvme_io": false, 00:13:21.553 "nvme_io_md": false, 00:13:21.553 "write_zeroes": true, 00:13:21.553 "zcopy": false, 00:13:21.553 "get_zone_info": false, 00:13:21.553 "zone_management": false, 00:13:21.553 "zone_append": false, 00:13:21.553 "compare": false, 00:13:21.553 "compare_and_write": false, 00:13:21.553 "abort": false, 00:13:21.553 "seek_hole": false, 00:13:21.553 "seek_data": false, 00:13:21.553 "copy": false, 00:13:21.553 "nvme_iov_md": false 00:13:21.553 }, 00:13:21.553 "memory_domains": [ 00:13:21.553 { 00:13:21.553 "dma_device_id": "system", 00:13:21.553 "dma_device_type": 1 00:13:21.553 }, 00:13:21.553 { 00:13:21.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.553 "dma_device_type": 2 00:13:21.553 }, 00:13:21.553 { 00:13:21.553 "dma_device_id": "system", 00:13:21.553 "dma_device_type": 1 00:13:21.553 }, 00:13:21.553 { 00:13:21.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.553 "dma_device_type": 2 00:13:21.553 } 00:13:21.553 ], 00:13:21.553 "driver_specific": { 00:13:21.553 "raid": { 00:13:21.553 "uuid": "dc30dcc6-3c08-4a87-a884-c7de3f14cfa7", 00:13:21.553 "strip_size_kb": 0, 00:13:21.553 "state": "online", 00:13:21.553 "raid_level": "raid1", 00:13:21.553 "superblock": false, 00:13:21.553 "num_base_bdevs": 2, 00:13:21.553 "num_base_bdevs_discovered": 2, 00:13:21.553 "num_base_bdevs_operational": 2, 00:13:21.553 "base_bdevs_list": [ 00:13:21.553 { 00:13:21.553 "name": "BaseBdev1", 00:13:21.553 "uuid": "ae52346c-beb0-4006-afdf-15297f7b60ce", 00:13:21.553 "is_configured": true, 00:13:21.553 "data_offset": 0, 00:13:21.553 "data_size": 65536 00:13:21.553 }, 00:13:21.553 { 00:13:21.553 "name": "BaseBdev2", 00:13:21.553 "uuid": 
"839724da-a2a2-4bee-94c8-9e28ad9e3db8", 00:13:21.553 "is_configured": true, 00:13:21.553 "data_offset": 0, 00:13:21.553 "data_size": 65536 00:13:21.553 } 00:13:21.553 ] 00:13:21.553 } 00:13:21.553 } 00:13:21.553 }' 00:13:21.553 13:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:21.553 13:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:21.553 BaseBdev2' 00:13:21.553 13:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:21.553 13:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:21.553 13:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:21.553 13:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:21.553 13:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:21.553 13:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.553 13:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.553 13:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.553 13:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:21.553 13:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:21.553 13:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:21.553 13:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:13:21.553 13:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:21.553 13:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.553 13:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.553 13:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.553 13:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:21.553 13:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:21.553 13:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:21.553 13:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.553 13:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.553 [2024-11-20 13:33:20.998456] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:21.812 13:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.812 13:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:21.812 13:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:21.812 13:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:21.812 13:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:21.812 13:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:21.812 13:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:13:21.812 13:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:13:21.812 13:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:21.812 13:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:21.812 13:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:21.812 13:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:21.812 13:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.812 13:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.812 13:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.812 13:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.812 13:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.812 13:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.812 13:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.812 13:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.812 13:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.812 13:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.812 "name": "Existed_Raid", 00:13:21.812 "uuid": "dc30dcc6-3c08-4a87-a884-c7de3f14cfa7", 00:13:21.812 "strip_size_kb": 0, 00:13:21.812 "state": "online", 00:13:21.812 "raid_level": "raid1", 00:13:21.812 "superblock": false, 00:13:21.812 "num_base_bdevs": 2, 00:13:21.812 "num_base_bdevs_discovered": 1, 00:13:21.812 "num_base_bdevs_operational": 1, 00:13:21.812 "base_bdevs_list": [ 00:13:21.812 { 
00:13:21.812 "name": null, 00:13:21.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.812 "is_configured": false, 00:13:21.812 "data_offset": 0, 00:13:21.812 "data_size": 65536 00:13:21.812 }, 00:13:21.812 { 00:13:21.812 "name": "BaseBdev2", 00:13:21.812 "uuid": "839724da-a2a2-4bee-94c8-9e28ad9e3db8", 00:13:21.812 "is_configured": true, 00:13:21.812 "data_offset": 0, 00:13:21.812 "data_size": 65536 00:13:21.812 } 00:13:21.812 ] 00:13:21.812 }' 00:13:21.812 13:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.812 13:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.096 13:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:22.096 13:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:22.096 13:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.096 13:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.096 13:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.096 13:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:22.096 13:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.096 13:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:22.096 13:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:22.096 13:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:22.096 13:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.096 13:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:22.355 [2024-11-20 13:33:21.584258] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:22.355 [2024-11-20 13:33:21.584364] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:22.355 [2024-11-20 13:33:21.679098] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:22.355 13:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.355 [2024-11-20 13:33:21.680472] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:22.355 [2024-11-20 13:33:21.680506] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:22.355 13:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:22.355 13:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:22.355 13:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:22.355 13:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.355 13:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.355 13:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.355 13:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.355 13:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:22.355 13:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:22.355 13:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:13:22.355 13:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62486 00:13:22.355 13:33:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62486 ']' 00:13:22.355 13:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62486 00:13:22.355 13:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:22.355 13:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:22.355 13:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62486 00:13:22.355 killing process with pid 62486 00:13:22.355 13:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:22.355 13:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:22.355 13:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62486' 00:13:22.355 13:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62486 00:13:22.355 [2024-11-20 13:33:21.772671] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:22.355 13:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62486 00:13:22.355 [2024-11-20 13:33:21.790177] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:23.731 13:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:23.731 00:13:23.731 real 0m5.010s 00:13:23.731 user 0m7.180s 00:13:23.731 sys 0m0.908s 00:13:23.731 ************************************ 00:13:23.731 END TEST raid_state_function_test 00:13:23.731 ************************************ 00:13:23.731 13:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:23.731 13:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.731 13:33:22 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:13:23.731 13:33:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:23.731 13:33:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:23.731 13:33:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:23.731 ************************************ 00:13:23.731 START TEST raid_state_function_test_sb 00:13:23.731 ************************************ 00:13:23.731 13:33:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:13:23.731 13:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:23.731 13:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:13:23.731 13:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:23.731 13:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:23.731 13:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:23.731 13:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:23.731 13:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:23.731 13:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:23.731 13:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:23.731 13:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:23.731 13:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:23.731 13:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:23.731 13:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:23.731 13:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:23.731 13:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:23.731 13:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:23.731 13:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:23.731 13:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:23.731 13:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:23.731 13:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:23.731 13:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:23.731 13:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:23.731 Process raid pid: 62733 00:13:23.731 13:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62733 00:13:23.731 13:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62733' 00:13:23.731 13:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62733 00:13:23.731 13:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62733 ']' 00:13:23.731 13:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.731 13:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:23.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:23.731 13:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.731 13:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:23.731 13:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.731 13:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:23.731 [2024-11-20 13:33:23.100754] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:13:23.731 [2024-11-20 13:33:23.100887] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:23.990 [2024-11-20 13:33:23.282764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.990 [2024-11-20 13:33:23.397712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.249 [2024-11-20 13:33:23.601737] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:24.249 [2024-11-20 13:33:23.601782] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:24.508 13:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:24.508 13:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:24.508 13:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:24.508 13:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.508 13:33:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:24.508 [2024-11-20 13:33:23.947719] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:24.508 [2024-11-20 13:33:23.947774] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:24.508 [2024-11-20 13:33:23.947786] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:24.508 [2024-11-20 13:33:23.947799] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:24.508 13:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.508 13:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:24.508 13:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:24.508 13:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:24.508 13:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:24.508 13:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:24.508 13:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:24.508 13:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.508 13:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.508 13:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.508 13:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.508 13:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.508 
13:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.508 13:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.508 13:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.508 13:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.766 13:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.766 "name": "Existed_Raid", 00:13:24.766 "uuid": "4d4d4ecd-98d4-4386-b130-b1aa75748856", 00:13:24.766 "strip_size_kb": 0, 00:13:24.766 "state": "configuring", 00:13:24.766 "raid_level": "raid1", 00:13:24.766 "superblock": true, 00:13:24.766 "num_base_bdevs": 2, 00:13:24.766 "num_base_bdevs_discovered": 0, 00:13:24.766 "num_base_bdevs_operational": 2, 00:13:24.766 "base_bdevs_list": [ 00:13:24.766 { 00:13:24.766 "name": "BaseBdev1", 00:13:24.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.766 "is_configured": false, 00:13:24.766 "data_offset": 0, 00:13:24.766 "data_size": 0 00:13:24.766 }, 00:13:24.766 { 00:13:24.766 "name": "BaseBdev2", 00:13:24.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.766 "is_configured": false, 00:13:24.766 "data_offset": 0, 00:13:24.766 "data_size": 0 00:13:24.766 } 00:13:24.766 ] 00:13:24.766 }' 00:13:24.766 13:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.766 13:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.025 
[2024-11-20 13:33:24.319219] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:25.025 [2024-11-20 13:33:24.319253] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.025 [2024-11-20 13:33:24.327204] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:25.025 [2024-11-20 13:33:24.327249] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:25.025 [2024-11-20 13:33:24.327259] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:25.025 [2024-11-20 13:33:24.327274] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.025 [2024-11-20 13:33:24.373419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:25.025 BaseBdev1 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.025 [ 00:13:25.025 { 00:13:25.025 "name": "BaseBdev1", 00:13:25.025 "aliases": [ 00:13:25.025 "9a56d8e9-61ba-4be8-9bcc-3edeffeed8df" 00:13:25.025 ], 00:13:25.025 "product_name": "Malloc disk", 00:13:25.025 "block_size": 512, 00:13:25.025 "num_blocks": 65536, 00:13:25.025 "uuid": "9a56d8e9-61ba-4be8-9bcc-3edeffeed8df", 00:13:25.025 "assigned_rate_limits": { 00:13:25.025 "rw_ios_per_sec": 0, 00:13:25.025 "rw_mbytes_per_sec": 0, 00:13:25.025 "r_mbytes_per_sec": 0, 
00:13:25.025 "w_mbytes_per_sec": 0 00:13:25.025 }, 00:13:25.025 "claimed": true, 00:13:25.025 "claim_type": "exclusive_write", 00:13:25.025 "zoned": false, 00:13:25.025 "supported_io_types": { 00:13:25.025 "read": true, 00:13:25.025 "write": true, 00:13:25.025 "unmap": true, 00:13:25.025 "flush": true, 00:13:25.025 "reset": true, 00:13:25.025 "nvme_admin": false, 00:13:25.025 "nvme_io": false, 00:13:25.025 "nvme_io_md": false, 00:13:25.025 "write_zeroes": true, 00:13:25.025 "zcopy": true, 00:13:25.025 "get_zone_info": false, 00:13:25.025 "zone_management": false, 00:13:25.025 "zone_append": false, 00:13:25.025 "compare": false, 00:13:25.025 "compare_and_write": false, 00:13:25.025 "abort": true, 00:13:25.025 "seek_hole": false, 00:13:25.025 "seek_data": false, 00:13:25.025 "copy": true, 00:13:25.025 "nvme_iov_md": false 00:13:25.025 }, 00:13:25.025 "memory_domains": [ 00:13:25.025 { 00:13:25.025 "dma_device_id": "system", 00:13:25.025 "dma_device_type": 1 00:13:25.025 }, 00:13:25.025 { 00:13:25.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.025 "dma_device_type": 2 00:13:25.025 } 00:13:25.025 ], 00:13:25.025 "driver_specific": {} 00:13:25.025 } 00:13:25.025 ] 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.025 13:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.025 "name": "Existed_Raid", 00:13:25.025 "uuid": "9071057b-67e5-4941-9eef-7361223a9794", 00:13:25.025 "strip_size_kb": 0, 00:13:25.025 "state": "configuring", 00:13:25.025 "raid_level": "raid1", 00:13:25.025 "superblock": true, 00:13:25.025 "num_base_bdevs": 2, 00:13:25.025 "num_base_bdevs_discovered": 1, 00:13:25.025 "num_base_bdevs_operational": 2, 00:13:25.025 "base_bdevs_list": [ 00:13:25.025 { 00:13:25.025 "name": "BaseBdev1", 00:13:25.025 "uuid": "9a56d8e9-61ba-4be8-9bcc-3edeffeed8df", 00:13:25.025 "is_configured": true, 00:13:25.025 "data_offset": 2048, 00:13:25.025 "data_size": 63488 00:13:25.026 }, 00:13:25.026 { 00:13:25.026 "name": "BaseBdev2", 00:13:25.026 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:25.026 "is_configured": false, 00:13:25.026 "data_offset": 0, 00:13:25.026 "data_size": 0 00:13:25.026 } 00:13:25.026 ] 00:13:25.026 }' 00:13:25.026 13:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.026 13:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.593 13:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:25.593 13:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.593 13:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.593 [2024-11-20 13:33:24.797200] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:25.593 [2024-11-20 13:33:24.797251] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:25.593 13:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.593 13:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:25.593 13:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.593 13:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.593 [2024-11-20 13:33:24.805237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:25.593 [2024-11-20 13:33:24.807383] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:25.593 [2024-11-20 13:33:24.807556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:25.593 13:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:13:25.593 13:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:25.593 13:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:25.593 13:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:25.593 13:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:25.593 13:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:25.593 13:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.593 13:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.593 13:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:25.593 13:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.593 13:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.593 13:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.593 13:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.593 13:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.593 13:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.593 13:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.593 13:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.593 13:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:13:25.593 13:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.593 "name": "Existed_Raid", 00:13:25.593 "uuid": "6db2e90e-bc37-4180-9404-08bbe7fac107", 00:13:25.593 "strip_size_kb": 0, 00:13:25.593 "state": "configuring", 00:13:25.593 "raid_level": "raid1", 00:13:25.593 "superblock": true, 00:13:25.593 "num_base_bdevs": 2, 00:13:25.593 "num_base_bdevs_discovered": 1, 00:13:25.593 "num_base_bdevs_operational": 2, 00:13:25.593 "base_bdevs_list": [ 00:13:25.593 { 00:13:25.593 "name": "BaseBdev1", 00:13:25.593 "uuid": "9a56d8e9-61ba-4be8-9bcc-3edeffeed8df", 00:13:25.593 "is_configured": true, 00:13:25.593 "data_offset": 2048, 00:13:25.593 "data_size": 63488 00:13:25.593 }, 00:13:25.593 { 00:13:25.593 "name": "BaseBdev2", 00:13:25.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.593 "is_configured": false, 00:13:25.593 "data_offset": 0, 00:13:25.593 "data_size": 0 00:13:25.593 } 00:13:25.593 ] 00:13:25.593 }' 00:13:25.593 13:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.593 13:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.853 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:25.853 13:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.853 13:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.853 [2024-11-20 13:33:25.224166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:25.853 [2024-11-20 13:33:25.224410] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:25.853 [2024-11-20 13:33:25.224426] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:25.853 BaseBdev2 00:13:25.853 [2024-11-20 13:33:25.224686] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:25.853 [2024-11-20 13:33:25.224848] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:25.853 [2024-11-20 13:33:25.224870] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:25.853 13:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.853 [2024-11-20 13:33:25.225006] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:25.853 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:25.853 13:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:25.853 13:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:25.853 13:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:25.853 13:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:25.853 13:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:25.853 13:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:25.853 13:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.853 13:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.853 13:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.853 13:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:25.853 13:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:25.853 13:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.853 [ 00:13:25.853 { 00:13:25.853 "name": "BaseBdev2", 00:13:25.853 "aliases": [ 00:13:25.853 "3f4119cd-39af-45d2-a050-9220da870927" 00:13:25.853 ], 00:13:25.853 "product_name": "Malloc disk", 00:13:25.853 "block_size": 512, 00:13:25.853 "num_blocks": 65536, 00:13:25.853 "uuid": "3f4119cd-39af-45d2-a050-9220da870927", 00:13:25.853 "assigned_rate_limits": { 00:13:25.853 "rw_ios_per_sec": 0, 00:13:25.853 "rw_mbytes_per_sec": 0, 00:13:25.853 "r_mbytes_per_sec": 0, 00:13:25.853 "w_mbytes_per_sec": 0 00:13:25.853 }, 00:13:25.853 "claimed": true, 00:13:25.853 "claim_type": "exclusive_write", 00:13:25.853 "zoned": false, 00:13:25.853 "supported_io_types": { 00:13:25.853 "read": true, 00:13:25.853 "write": true, 00:13:25.853 "unmap": true, 00:13:25.853 "flush": true, 00:13:25.853 "reset": true, 00:13:25.853 "nvme_admin": false, 00:13:25.853 "nvme_io": false, 00:13:25.853 "nvme_io_md": false, 00:13:25.853 "write_zeroes": true, 00:13:25.853 "zcopy": true, 00:13:25.853 "get_zone_info": false, 00:13:25.853 "zone_management": false, 00:13:25.853 "zone_append": false, 00:13:25.853 "compare": false, 00:13:25.853 "compare_and_write": false, 00:13:25.853 "abort": true, 00:13:25.853 "seek_hole": false, 00:13:25.853 "seek_data": false, 00:13:25.853 "copy": true, 00:13:25.853 "nvme_iov_md": false 00:13:25.853 }, 00:13:25.853 "memory_domains": [ 00:13:25.853 { 00:13:25.853 "dma_device_id": "system", 00:13:25.853 "dma_device_type": 1 00:13:25.853 }, 00:13:25.853 { 00:13:25.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.853 "dma_device_type": 2 00:13:25.853 } 00:13:25.853 ], 00:13:25.853 "driver_specific": {} 00:13:25.853 } 00:13:25.853 ] 00:13:25.853 13:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.853 13:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:25.853 
13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:25.853 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:25.853 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:13:25.853 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:25.853 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.853 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.853 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.853 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:25.853 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.853 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.853 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.853 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.853 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.853 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.853 13:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.853 13:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.853 13:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.853 13:33:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.853 "name": "Existed_Raid", 00:13:25.853 "uuid": "6db2e90e-bc37-4180-9404-08bbe7fac107", 00:13:25.853 "strip_size_kb": 0, 00:13:25.853 "state": "online", 00:13:25.853 "raid_level": "raid1", 00:13:25.853 "superblock": true, 00:13:25.853 "num_base_bdevs": 2, 00:13:25.853 "num_base_bdevs_discovered": 2, 00:13:25.853 "num_base_bdevs_operational": 2, 00:13:25.853 "base_bdevs_list": [ 00:13:25.853 { 00:13:25.853 "name": "BaseBdev1", 00:13:25.853 "uuid": "9a56d8e9-61ba-4be8-9bcc-3edeffeed8df", 00:13:25.853 "is_configured": true, 00:13:25.853 "data_offset": 2048, 00:13:25.853 "data_size": 63488 00:13:25.853 }, 00:13:25.853 { 00:13:25.853 "name": "BaseBdev2", 00:13:25.853 "uuid": "3f4119cd-39af-45d2-a050-9220da870927", 00:13:25.853 "is_configured": true, 00:13:25.853 "data_offset": 2048, 00:13:25.853 "data_size": 63488 00:13:25.853 } 00:13:25.853 ] 00:13:25.853 }' 00:13:25.853 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.853 13:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.441 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:26.441 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:26.441 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:26.441 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:26.441 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:26.441 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:26.441 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:13:26.441 13:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.441 13:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.441 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:26.441 [2024-11-20 13:33:25.659921] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:26.441 13:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.441 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:26.441 "name": "Existed_Raid", 00:13:26.441 "aliases": [ 00:13:26.441 "6db2e90e-bc37-4180-9404-08bbe7fac107" 00:13:26.441 ], 00:13:26.441 "product_name": "Raid Volume", 00:13:26.441 "block_size": 512, 00:13:26.441 "num_blocks": 63488, 00:13:26.441 "uuid": "6db2e90e-bc37-4180-9404-08bbe7fac107", 00:13:26.441 "assigned_rate_limits": { 00:13:26.441 "rw_ios_per_sec": 0, 00:13:26.441 "rw_mbytes_per_sec": 0, 00:13:26.441 "r_mbytes_per_sec": 0, 00:13:26.441 "w_mbytes_per_sec": 0 00:13:26.441 }, 00:13:26.441 "claimed": false, 00:13:26.441 "zoned": false, 00:13:26.441 "supported_io_types": { 00:13:26.441 "read": true, 00:13:26.441 "write": true, 00:13:26.441 "unmap": false, 00:13:26.441 "flush": false, 00:13:26.441 "reset": true, 00:13:26.441 "nvme_admin": false, 00:13:26.441 "nvme_io": false, 00:13:26.441 "nvme_io_md": false, 00:13:26.441 "write_zeroes": true, 00:13:26.441 "zcopy": false, 00:13:26.441 "get_zone_info": false, 00:13:26.441 "zone_management": false, 00:13:26.441 "zone_append": false, 00:13:26.441 "compare": false, 00:13:26.441 "compare_and_write": false, 00:13:26.441 "abort": false, 00:13:26.441 "seek_hole": false, 00:13:26.441 "seek_data": false, 00:13:26.441 "copy": false, 00:13:26.441 "nvme_iov_md": false 00:13:26.441 }, 00:13:26.441 "memory_domains": [ 00:13:26.441 { 00:13:26.441 
"dma_device_id": "system", 00:13:26.441 "dma_device_type": 1 00:13:26.441 }, 00:13:26.441 { 00:13:26.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.441 "dma_device_type": 2 00:13:26.441 }, 00:13:26.441 { 00:13:26.441 "dma_device_id": "system", 00:13:26.441 "dma_device_type": 1 00:13:26.441 }, 00:13:26.441 { 00:13:26.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.441 "dma_device_type": 2 00:13:26.441 } 00:13:26.441 ], 00:13:26.441 "driver_specific": { 00:13:26.441 "raid": { 00:13:26.441 "uuid": "6db2e90e-bc37-4180-9404-08bbe7fac107", 00:13:26.441 "strip_size_kb": 0, 00:13:26.441 "state": "online", 00:13:26.441 "raid_level": "raid1", 00:13:26.441 "superblock": true, 00:13:26.441 "num_base_bdevs": 2, 00:13:26.441 "num_base_bdevs_discovered": 2, 00:13:26.441 "num_base_bdevs_operational": 2, 00:13:26.441 "base_bdevs_list": [ 00:13:26.441 { 00:13:26.441 "name": "BaseBdev1", 00:13:26.441 "uuid": "9a56d8e9-61ba-4be8-9bcc-3edeffeed8df", 00:13:26.441 "is_configured": true, 00:13:26.441 "data_offset": 2048, 00:13:26.441 "data_size": 63488 00:13:26.441 }, 00:13:26.441 { 00:13:26.441 "name": "BaseBdev2", 00:13:26.441 "uuid": "3f4119cd-39af-45d2-a050-9220da870927", 00:13:26.441 "is_configured": true, 00:13:26.441 "data_offset": 2048, 00:13:26.441 "data_size": 63488 00:13:26.441 } 00:13:26.441 ] 00:13:26.441 } 00:13:26.441 } 00:13:26.441 }' 00:13:26.441 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:26.441 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:26.441 BaseBdev2' 00:13:26.441 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:26.441 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:26.441 13:33:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:26.441 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:26.441 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:26.441 13:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.441 13:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.441 13:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.441 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:26.441 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:26.441 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:26.441 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:26.441 13:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.441 13:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.441 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:26.441 13:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.441 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:26.441 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:26.441 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:13:26.441 13:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.441 13:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.441 [2024-11-20 13:33:25.883384] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:26.701 13:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.701 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:26.701 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:26.701 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:26.701 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:26.701 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:26.701 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:13:26.701 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.701 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:26.701 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.701 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.701 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:26.701 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.701 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.701 13:33:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.701 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.701 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.701 13:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.701 13:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.701 13:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.701 13:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.701 13:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.701 "name": "Existed_Raid", 00:13:26.701 "uuid": "6db2e90e-bc37-4180-9404-08bbe7fac107", 00:13:26.701 "strip_size_kb": 0, 00:13:26.701 "state": "online", 00:13:26.701 "raid_level": "raid1", 00:13:26.701 "superblock": true, 00:13:26.701 "num_base_bdevs": 2, 00:13:26.701 "num_base_bdevs_discovered": 1, 00:13:26.701 "num_base_bdevs_operational": 1, 00:13:26.701 "base_bdevs_list": [ 00:13:26.701 { 00:13:26.701 "name": null, 00:13:26.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.701 "is_configured": false, 00:13:26.701 "data_offset": 0, 00:13:26.701 "data_size": 63488 00:13:26.701 }, 00:13:26.701 { 00:13:26.701 "name": "BaseBdev2", 00:13:26.701 "uuid": "3f4119cd-39af-45d2-a050-9220da870927", 00:13:26.701 "is_configured": true, 00:13:26.701 "data_offset": 2048, 00:13:26.701 "data_size": 63488 00:13:26.701 } 00:13:26.701 ] 00:13:26.701 }' 00:13:26.701 13:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.701 13:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.960 13:33:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:26.960 13:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:26.960 13:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:26.960 13:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.960 13:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.960 13:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.960 13:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.960 13:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:26.960 13:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:26.960 13:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:26.960 13:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.960 13:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.960 [2024-11-20 13:33:26.428721] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:26.960 [2024-11-20 13:33:26.428822] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:27.220 [2024-11-20 13:33:26.526030] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:27.220 [2024-11-20 13:33:26.526109] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:27.220 [2024-11-20 13:33:26.526126] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, 
state offline 00:13:27.220 13:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.220 13:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:27.220 13:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:27.220 13:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.220 13:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.220 13:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:27.220 13:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.220 13:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.220 13:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:27.220 13:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:27.220 13:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:13:27.220 13:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62733 00:13:27.220 13:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62733 ']' 00:13:27.220 13:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62733 00:13:27.220 13:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:27.220 13:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:27.220 13:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62733 00:13:27.220 killing process with pid 62733 00:13:27.220 13:33:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:27.220 13:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:27.220 13:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62733' 00:13:27.220 13:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62733 00:13:27.220 13:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62733 00:13:27.220 [2024-11-20 13:33:26.604644] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:27.220 [2024-11-20 13:33:26.621180] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:28.597 13:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:28.597 00:13:28.597 real 0m4.756s 00:13:28.597 user 0m6.743s 00:13:28.597 sys 0m0.837s 00:13:28.597 13:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:28.597 ************************************ 00:13:28.597 END TEST raid_state_function_test_sb 00:13:28.597 ************************************ 00:13:28.597 13:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.597 13:33:27 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:13:28.597 13:33:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:28.597 13:33:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:28.597 13:33:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:28.597 ************************************ 00:13:28.597 START TEST raid_superblock_test 00:13:28.597 ************************************ 00:13:28.597 13:33:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:13:28.597 
13:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:13:28.597 13:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:13:28.597 13:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:28.597 13:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:28.597 13:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:28.597 13:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:28.598 13:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:28.598 13:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:28.598 13:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:28.598 13:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:28.598 13:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:28.598 13:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:28.598 13:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:28.598 13:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:13:28.598 13:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:13:28.598 13:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62980 00:13:28.598 13:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62980 00:13:28.598 13:33:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62980 ']' 00:13:28.598 13:33:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:13:28.598 13:33:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:28.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.598 13:33:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.598 13:33:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:28.598 13:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:28.598 13:33:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.598 [2024-11-20 13:33:27.904185] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:13:28.598 [2024-11-20 13:33:27.904318] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62980 ] 00:13:28.856 [2024-11-20 13:33:28.085107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.856 [2024-11-20 13:33:28.197457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.115 [2024-11-20 13:33:28.411138] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:29.116 [2024-11-20 13:33:28.411215] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:29.374 13:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:29.374 13:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:29.374 13:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:29.374 13:33:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:29.374 13:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:29.374 13:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:29.374 13:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:29.374 13:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:29.374 13:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:29.374 13:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:29.374 13:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:29.374 13:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.374 13:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.374 malloc1 00:13:29.374 13:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.374 13:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:29.374 13:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.374 13:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.374 [2024-11-20 13:33:28.811184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:29.374 [2024-11-20 13:33:28.811405] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.374 [2024-11-20 13:33:28.811442] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:29.374 [2024-11-20 13:33:28.811455] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.374 [2024-11-20 13:33:28.814176] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.374 [2024-11-20 13:33:28.814217] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:29.375 pt1 00:13:29.375 13:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.375 13:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:29.375 13:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:29.375 13:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:29.375 13:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:29.375 13:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:29.375 13:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:29.375 13:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:29.375 13:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:29.375 13:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:29.375 13:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.375 13:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.375 malloc2 00:13:29.375 13:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.375 13:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:29.375 13:33:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.375 13:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.634 [2024-11-20 13:33:28.860848] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:29.634 [2024-11-20 13:33:28.860910] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.634 [2024-11-20 13:33:28.860940] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:29.634 [2024-11-20 13:33:28.860952] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.634 [2024-11-20 13:33:28.863355] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.634 [2024-11-20 13:33:28.863395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:29.634 pt2 00:13:29.634 13:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.634 13:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:29.634 13:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:29.634 13:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:13:29.634 13:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.634 13:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.634 [2024-11-20 13:33:28.868879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:29.634 [2024-11-20 13:33:28.870957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:29.634 [2024-11-20 13:33:28.871317] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:29.634 [2024-11-20 13:33:28.871345] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:29.634 [2024-11-20 13:33:28.871645] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:29.634 [2024-11-20 13:33:28.871806] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:29.634 [2024-11-20 13:33:28.871827] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:29.634 [2024-11-20 13:33:28.871986] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.634 13:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.634 13:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:29.634 13:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:29.634 13:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.634 13:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.634 13:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.634 13:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:29.634 13:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.634 13:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.634 13:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.634 13:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.634 13:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.634 13:33:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.634 13:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.634 13:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.634 13:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.634 13:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.634 "name": "raid_bdev1", 00:13:29.634 "uuid": "818bbf85-5de2-45ce-8f39-96ca27aaf918", 00:13:29.634 "strip_size_kb": 0, 00:13:29.634 "state": "online", 00:13:29.634 "raid_level": "raid1", 00:13:29.634 "superblock": true, 00:13:29.634 "num_base_bdevs": 2, 00:13:29.634 "num_base_bdevs_discovered": 2, 00:13:29.634 "num_base_bdevs_operational": 2, 00:13:29.634 "base_bdevs_list": [ 00:13:29.634 { 00:13:29.634 "name": "pt1", 00:13:29.634 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:29.634 "is_configured": true, 00:13:29.634 "data_offset": 2048, 00:13:29.634 "data_size": 63488 00:13:29.634 }, 00:13:29.634 { 00:13:29.634 "name": "pt2", 00:13:29.634 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:29.634 "is_configured": true, 00:13:29.634 "data_offset": 2048, 00:13:29.634 "data_size": 63488 00:13:29.634 } 00:13:29.634 ] 00:13:29.634 }' 00:13:29.634 13:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.634 13:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.895 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:29.895 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:29.895 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:29.895 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:29.895 13:33:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:29.895 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:29.895 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:29.895 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.895 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:29.895 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.895 [2024-11-20 13:33:29.284525] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:29.895 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.895 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:29.895 "name": "raid_bdev1", 00:13:29.895 "aliases": [ 00:13:29.895 "818bbf85-5de2-45ce-8f39-96ca27aaf918" 00:13:29.895 ], 00:13:29.895 "product_name": "Raid Volume", 00:13:29.895 "block_size": 512, 00:13:29.895 "num_blocks": 63488, 00:13:29.895 "uuid": "818bbf85-5de2-45ce-8f39-96ca27aaf918", 00:13:29.895 "assigned_rate_limits": { 00:13:29.895 "rw_ios_per_sec": 0, 00:13:29.895 "rw_mbytes_per_sec": 0, 00:13:29.895 "r_mbytes_per_sec": 0, 00:13:29.895 "w_mbytes_per_sec": 0 00:13:29.895 }, 00:13:29.895 "claimed": false, 00:13:29.895 "zoned": false, 00:13:29.895 "supported_io_types": { 00:13:29.895 "read": true, 00:13:29.895 "write": true, 00:13:29.895 "unmap": false, 00:13:29.895 "flush": false, 00:13:29.895 "reset": true, 00:13:29.895 "nvme_admin": false, 00:13:29.895 "nvme_io": false, 00:13:29.895 "nvme_io_md": false, 00:13:29.895 "write_zeroes": true, 00:13:29.895 "zcopy": false, 00:13:29.895 "get_zone_info": false, 00:13:29.895 "zone_management": false, 00:13:29.895 "zone_append": false, 00:13:29.895 "compare": false, 00:13:29.895 
"compare_and_write": false, 00:13:29.895 "abort": false, 00:13:29.895 "seek_hole": false, 00:13:29.895 "seek_data": false, 00:13:29.895 "copy": false, 00:13:29.895 "nvme_iov_md": false 00:13:29.895 }, 00:13:29.895 "memory_domains": [ 00:13:29.895 { 00:13:29.895 "dma_device_id": "system", 00:13:29.895 "dma_device_type": 1 00:13:29.895 }, 00:13:29.895 { 00:13:29.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.895 "dma_device_type": 2 00:13:29.895 }, 00:13:29.895 { 00:13:29.895 "dma_device_id": "system", 00:13:29.895 "dma_device_type": 1 00:13:29.895 }, 00:13:29.895 { 00:13:29.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.895 "dma_device_type": 2 00:13:29.895 } 00:13:29.895 ], 00:13:29.895 "driver_specific": { 00:13:29.895 "raid": { 00:13:29.896 "uuid": "818bbf85-5de2-45ce-8f39-96ca27aaf918", 00:13:29.896 "strip_size_kb": 0, 00:13:29.896 "state": "online", 00:13:29.896 "raid_level": "raid1", 00:13:29.896 "superblock": true, 00:13:29.896 "num_base_bdevs": 2, 00:13:29.896 "num_base_bdevs_discovered": 2, 00:13:29.896 "num_base_bdevs_operational": 2, 00:13:29.896 "base_bdevs_list": [ 00:13:29.896 { 00:13:29.896 "name": "pt1", 00:13:29.896 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:29.896 "is_configured": true, 00:13:29.896 "data_offset": 2048, 00:13:29.896 "data_size": 63488 00:13:29.896 }, 00:13:29.896 { 00:13:29.896 "name": "pt2", 00:13:29.896 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:29.896 "is_configured": true, 00:13:29.896 "data_offset": 2048, 00:13:29.896 "data_size": 63488 00:13:29.896 } 00:13:29.896 ] 00:13:29.896 } 00:13:29.896 } 00:13:29.896 }' 00:13:29.896 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:29.896 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:29.896 pt2' 00:13:29.896 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r 
'[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:30.155 13:33:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:30.155 [2024-11-20 13:33:29.492267] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=818bbf85-5de2-45ce-8f39-96ca27aaf918 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 818bbf85-5de2-45ce-8f39-96ca27aaf918 ']' 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.155 [2024-11-20 13:33:29.531888] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:30.155 [2024-11-20 13:33:29.531917] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:30.155 [2024-11-20 13:33:29.531997] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:30.155 [2024-11-20 13:33:29.532068] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:30.155 [2024-11-20 13:33:29.532084] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:30.155 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.415 [2024-11-20 13:33:29.647761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:30.415 [2024-11-20 13:33:29.649847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:30.415 [2024-11-20 
13:33:29.649918] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:30.415 [2024-11-20 13:33:29.649972] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:30.415 [2024-11-20 13:33:29.649990] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:30.415 [2024-11-20 13:33:29.650002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:13:30.415 request: 00:13:30.415 { 00:13:30.415 "name": "raid_bdev1", 00:13:30.415 "raid_level": "raid1", 00:13:30.415 "base_bdevs": [ 00:13:30.415 "malloc1", 00:13:30.415 "malloc2" 00:13:30.415 ], 00:13:30.415 "superblock": false, 00:13:30.415 "method": "bdev_raid_create", 00:13:30.415 "req_id": 1 00:13:30.415 } 00:13:30.415 Got JSON-RPC error response 00:13:30.415 response: 00:13:30.415 { 00:13:30.415 "code": -17, 00:13:30.415 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:30.415 } 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.415 [2024-11-20 13:33:29.703661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:30.415 [2024-11-20 13:33:29.703721] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.415 [2024-11-20 13:33:29.703743] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:30.415 [2024-11-20 13:33:29.703757] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.415 [2024-11-20 13:33:29.706229] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.415 [2024-11-20 13:33:29.706278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:30.415 [2024-11-20 13:33:29.706372] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:30.415 [2024-11-20 13:33:29.706434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:30.415 pt1 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.415 "name": "raid_bdev1", 00:13:30.415 "uuid": "818bbf85-5de2-45ce-8f39-96ca27aaf918", 00:13:30.415 "strip_size_kb": 0, 00:13:30.415 "state": "configuring", 00:13:30.415 "raid_level": "raid1", 00:13:30.415 "superblock": true, 00:13:30.415 "num_base_bdevs": 2, 00:13:30.415 "num_base_bdevs_discovered": 1, 00:13:30.415 "num_base_bdevs_operational": 2, 00:13:30.415 "base_bdevs_list": [ 00:13:30.415 { 00:13:30.415 "name": 
"pt1", 00:13:30.415 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:30.415 "is_configured": true, 00:13:30.415 "data_offset": 2048, 00:13:30.415 "data_size": 63488 00:13:30.415 }, 00:13:30.415 { 00:13:30.415 "name": null, 00:13:30.415 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:30.415 "is_configured": false, 00:13:30.415 "data_offset": 2048, 00:13:30.415 "data_size": 63488 00:13:30.415 } 00:13:30.415 ] 00:13:30.415 }' 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.415 13:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.674 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:13:30.674 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:30.674 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:30.674 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:30.674 13:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.674 13:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.674 [2024-11-20 13:33:30.087200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:30.674 [2024-11-20 13:33:30.087282] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.674 [2024-11-20 13:33:30.087305] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:30.674 [2024-11-20 13:33:30.087319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.674 [2024-11-20 13:33:30.087774] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.674 [2024-11-20 13:33:30.087809] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:13:30.674 [2024-11-20 13:33:30.087898] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:30.674 [2024-11-20 13:33:30.087928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:30.674 [2024-11-20 13:33:30.088048] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:30.674 [2024-11-20 13:33:30.088082] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:30.674 [2024-11-20 13:33:30.088346] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:30.674 [2024-11-20 13:33:30.088501] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:30.674 [2024-11-20 13:33:30.088511] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:30.674 [2024-11-20 13:33:30.088666] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:30.674 pt2 00:13:30.674 13:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.674 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:30.674 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:30.674 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:30.674 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:30.674 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.674 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:30.674 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:30.674 13:33:30 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:30.674 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.674 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.674 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.674 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.674 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.674 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.674 13:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.674 13:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.674 13:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.674 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.674 "name": "raid_bdev1", 00:13:30.674 "uuid": "818bbf85-5de2-45ce-8f39-96ca27aaf918", 00:13:30.674 "strip_size_kb": 0, 00:13:30.674 "state": "online", 00:13:30.674 "raid_level": "raid1", 00:13:30.674 "superblock": true, 00:13:30.674 "num_base_bdevs": 2, 00:13:30.674 "num_base_bdevs_discovered": 2, 00:13:30.674 "num_base_bdevs_operational": 2, 00:13:30.674 "base_bdevs_list": [ 00:13:30.674 { 00:13:30.674 "name": "pt1", 00:13:30.675 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:30.675 "is_configured": true, 00:13:30.675 "data_offset": 2048, 00:13:30.675 "data_size": 63488 00:13:30.675 }, 00:13:30.675 { 00:13:30.675 "name": "pt2", 00:13:30.675 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:30.675 "is_configured": true, 00:13:30.675 "data_offset": 2048, 00:13:30.675 "data_size": 63488 00:13:30.675 } 00:13:30.675 ] 00:13:30.675 }' 
00:13:30.675 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.675 13:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.242 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:31.242 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:31.242 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:31.242 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:31.242 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:31.242 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:31.242 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:31.242 13:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.242 13:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.242 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:31.242 [2024-11-20 13:33:30.522770] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:31.242 13:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.242 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:31.242 "name": "raid_bdev1", 00:13:31.242 "aliases": [ 00:13:31.242 "818bbf85-5de2-45ce-8f39-96ca27aaf918" 00:13:31.242 ], 00:13:31.242 "product_name": "Raid Volume", 00:13:31.242 "block_size": 512, 00:13:31.242 "num_blocks": 63488, 00:13:31.242 "uuid": "818bbf85-5de2-45ce-8f39-96ca27aaf918", 00:13:31.242 "assigned_rate_limits": { 00:13:31.242 "rw_ios_per_sec": 0, 00:13:31.242 "rw_mbytes_per_sec": 
0, 00:13:31.242 "r_mbytes_per_sec": 0, 00:13:31.242 "w_mbytes_per_sec": 0 00:13:31.242 }, 00:13:31.242 "claimed": false, 00:13:31.242 "zoned": false, 00:13:31.242 "supported_io_types": { 00:13:31.242 "read": true, 00:13:31.242 "write": true, 00:13:31.242 "unmap": false, 00:13:31.242 "flush": false, 00:13:31.242 "reset": true, 00:13:31.242 "nvme_admin": false, 00:13:31.242 "nvme_io": false, 00:13:31.242 "nvme_io_md": false, 00:13:31.242 "write_zeroes": true, 00:13:31.242 "zcopy": false, 00:13:31.242 "get_zone_info": false, 00:13:31.242 "zone_management": false, 00:13:31.242 "zone_append": false, 00:13:31.242 "compare": false, 00:13:31.242 "compare_and_write": false, 00:13:31.242 "abort": false, 00:13:31.242 "seek_hole": false, 00:13:31.242 "seek_data": false, 00:13:31.242 "copy": false, 00:13:31.242 "nvme_iov_md": false 00:13:31.242 }, 00:13:31.242 "memory_domains": [ 00:13:31.242 { 00:13:31.242 "dma_device_id": "system", 00:13:31.242 "dma_device_type": 1 00:13:31.242 }, 00:13:31.242 { 00:13:31.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.242 "dma_device_type": 2 00:13:31.242 }, 00:13:31.242 { 00:13:31.242 "dma_device_id": "system", 00:13:31.242 "dma_device_type": 1 00:13:31.242 }, 00:13:31.242 { 00:13:31.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.242 "dma_device_type": 2 00:13:31.242 } 00:13:31.242 ], 00:13:31.242 "driver_specific": { 00:13:31.242 "raid": { 00:13:31.242 "uuid": "818bbf85-5de2-45ce-8f39-96ca27aaf918", 00:13:31.242 "strip_size_kb": 0, 00:13:31.242 "state": "online", 00:13:31.242 "raid_level": "raid1", 00:13:31.242 "superblock": true, 00:13:31.242 "num_base_bdevs": 2, 00:13:31.242 "num_base_bdevs_discovered": 2, 00:13:31.242 "num_base_bdevs_operational": 2, 00:13:31.242 "base_bdevs_list": [ 00:13:31.242 { 00:13:31.242 "name": "pt1", 00:13:31.242 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:31.242 "is_configured": true, 00:13:31.242 "data_offset": 2048, 00:13:31.242 "data_size": 63488 00:13:31.242 }, 00:13:31.242 { 
00:13:31.242 "name": "pt2", 00:13:31.242 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:31.242 "is_configured": true, 00:13:31.242 "data_offset": 2048, 00:13:31.242 "data_size": 63488 00:13:31.242 } 00:13:31.242 ] 00:13:31.243 } 00:13:31.243 } 00:13:31.243 }' 00:13:31.243 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:31.243 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:31.243 pt2' 00:13:31.243 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:31.243 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:31.243 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:31.243 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:31.243 13:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.243 13:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.243 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:31.243 13:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.243 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:31.243 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:31.243 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:31.243 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:31.243 13:33:30 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:31.243 13:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.243 13:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.243 13:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.502 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:31.502 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:31.502 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:31.502 13:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.502 13:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.502 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:31.502 [2024-11-20 13:33:30.734555] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:31.502 13:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.502 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 818bbf85-5de2-45ce-8f39-96ca27aaf918 '!=' 818bbf85-5de2-45ce-8f39-96ca27aaf918 ']' 00:13:31.502 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:13:31.502 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:31.502 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:31.502 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:13:31.502 13:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.502 13:33:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.502 [2024-11-20 13:33:30.770305] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:31.502 13:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.502 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:31.502 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:31.502 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:31.502 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:31.502 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:31.502 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:31.502 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.502 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.502 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.502 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.502 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.502 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.502 13:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.502 13:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.502 13:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.502 13:33:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.502 "name": "raid_bdev1", 00:13:31.502 "uuid": "818bbf85-5de2-45ce-8f39-96ca27aaf918", 00:13:31.502 "strip_size_kb": 0, 00:13:31.502 "state": "online", 00:13:31.502 "raid_level": "raid1", 00:13:31.502 "superblock": true, 00:13:31.502 "num_base_bdevs": 2, 00:13:31.502 "num_base_bdevs_discovered": 1, 00:13:31.502 "num_base_bdevs_operational": 1, 00:13:31.502 "base_bdevs_list": [ 00:13:31.502 { 00:13:31.502 "name": null, 00:13:31.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.502 "is_configured": false, 00:13:31.502 "data_offset": 0, 00:13:31.502 "data_size": 63488 00:13:31.502 }, 00:13:31.502 { 00:13:31.502 "name": "pt2", 00:13:31.502 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:31.502 "is_configured": true, 00:13:31.502 "data_offset": 2048, 00:13:31.502 "data_size": 63488 00:13:31.502 } 00:13:31.502 ] 00:13:31.502 }' 00:13:31.502 13:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.502 13:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.761 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:31.761 13:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.761 13:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.761 [2024-11-20 13:33:31.182073] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:31.761 [2024-11-20 13:33:31.182103] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:31.761 [2024-11-20 13:33:31.182183] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:31.761 [2024-11-20 13:33:31.182230] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:31.761 [2024-11-20 13:33:31.182243] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:31.761 13:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.761 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.761 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:31.761 13:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.761 13:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.761 13:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.761 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:31.761 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:31.761 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:31.761 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:31.761 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:31.761 13:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.761 13:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.761 13:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.761 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:31.761 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:31.761 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:31.762 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:31.762 13:33:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=1 00:13:31.762 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:31.762 13:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.762 13:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.762 [2024-11-20 13:33:31.245935] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:31.762 [2024-11-20 13:33:31.245993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.762 [2024-11-20 13:33:31.246011] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:31.762 [2024-11-20 13:33:31.246025] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.020 [2024-11-20 13:33:31.248425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.021 [2024-11-20 13:33:31.248470] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:32.021 [2024-11-20 13:33:31.248547] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:32.021 [2024-11-20 13:33:31.248590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:32.021 [2024-11-20 13:33:31.248684] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:32.021 [2024-11-20 13:33:31.248699] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:32.021 [2024-11-20 13:33:31.248935] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:32.021 [2024-11-20 13:33:31.249097] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:32.021 [2024-11-20 13:33:31.249112] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name raid_bdev1, raid_bdev 0x617000008200 00:13:32.021 [2024-11-20 13:33:31.249250] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:32.021 pt2 00:13:32.021 13:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.021 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:32.021 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:32.021 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.021 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.021 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:32.021 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:32.021 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.021 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.021 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.021 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.021 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.021 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.021 13:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.021 13:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.021 13:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.021 13:33:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.021 "name": "raid_bdev1", 00:13:32.021 "uuid": "818bbf85-5de2-45ce-8f39-96ca27aaf918", 00:13:32.021 "strip_size_kb": 0, 00:13:32.021 "state": "online", 00:13:32.021 "raid_level": "raid1", 00:13:32.021 "superblock": true, 00:13:32.021 "num_base_bdevs": 2, 00:13:32.021 "num_base_bdevs_discovered": 1, 00:13:32.021 "num_base_bdevs_operational": 1, 00:13:32.021 "base_bdevs_list": [ 00:13:32.021 { 00:13:32.021 "name": null, 00:13:32.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.021 "is_configured": false, 00:13:32.021 "data_offset": 2048, 00:13:32.021 "data_size": 63488 00:13:32.021 }, 00:13:32.021 { 00:13:32.021 "name": "pt2", 00:13:32.021 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:32.021 "is_configured": true, 00:13:32.021 "data_offset": 2048, 00:13:32.021 "data_size": 63488 00:13:32.021 } 00:13:32.021 ] 00:13:32.021 }' 00:13:32.021 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.021 13:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.279 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:32.279 13:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.279 13:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.279 [2024-11-20 13:33:31.645368] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:32.279 [2024-11-20 13:33:31.645407] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:32.279 [2024-11-20 13:33:31.645480] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:32.279 [2024-11-20 13:33:31.645532] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:32.279 [2024-11-20 13:33:31.645543] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:32.279 13:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.279 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:32.279 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.279 13:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.279 13:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.279 13:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.279 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:32.279 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:32.279 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:13:32.279 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:32.279 13:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.279 13:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.279 [2024-11-20 13:33:31.693292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:32.279 [2024-11-20 13:33:31.693354] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:32.279 [2024-11-20 13:33:31.693375] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:13:32.279 [2024-11-20 13:33:31.693386] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.279 [2024-11-20 13:33:31.695798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:13:32.279 [2024-11-20 13:33:31.695840] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:32.279 [2024-11-20 13:33:31.695919] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:32.279 [2024-11-20 13:33:31.695968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:32.279 [2024-11-20 13:33:31.696127] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:32.279 [2024-11-20 13:33:31.696140] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:32.279 [2024-11-20 13:33:31.696157] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:13:32.279 [2024-11-20 13:33:31.696205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:32.279 [2024-11-20 13:33:31.696271] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:13:32.279 [2024-11-20 13:33:31.696280] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:32.279 [2024-11-20 13:33:31.696530] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:32.279 [2024-11-20 13:33:31.696669] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:13:32.279 [2024-11-20 13:33:31.696683] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:13:32.279 [2024-11-20 13:33:31.696816] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:32.279 pt1 00:13:32.279 13:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.279 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:13:32.279 13:33:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:32.279 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:32.279 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.279 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.279 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:32.279 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:32.279 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.279 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.279 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.279 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.280 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.280 13:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.280 13:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.280 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.280 13:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.280 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.280 "name": "raid_bdev1", 00:13:32.280 "uuid": "818bbf85-5de2-45ce-8f39-96ca27aaf918", 00:13:32.280 "strip_size_kb": 0, 00:13:32.280 "state": "online", 00:13:32.280 "raid_level": "raid1", 00:13:32.280 "superblock": true, 00:13:32.280 "num_base_bdevs": 2, 00:13:32.280 
"num_base_bdevs_discovered": 1, 00:13:32.280 "num_base_bdevs_operational": 1, 00:13:32.280 "base_bdevs_list": [ 00:13:32.280 { 00:13:32.280 "name": null, 00:13:32.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.280 "is_configured": false, 00:13:32.280 "data_offset": 2048, 00:13:32.280 "data_size": 63488 00:13:32.280 }, 00:13:32.280 { 00:13:32.280 "name": "pt2", 00:13:32.280 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:32.280 "is_configured": true, 00:13:32.280 "data_offset": 2048, 00:13:32.280 "data_size": 63488 00:13:32.280 } 00:13:32.280 ] 00:13:32.280 }' 00:13:32.280 13:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.280 13:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.847 13:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:32.847 13:33:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.847 13:33:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.847 13:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:32.847 13:33:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.847 13:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:32.848 13:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:32.848 13:33:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.848 13:33:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.848 13:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:32.848 [2024-11-20 13:33:32.125378] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:13:32.848 13:33:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.848 13:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 818bbf85-5de2-45ce-8f39-96ca27aaf918 '!=' 818bbf85-5de2-45ce-8f39-96ca27aaf918 ']' 00:13:32.848 13:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62980 00:13:32.848 13:33:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62980 ']' 00:13:32.848 13:33:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62980 00:13:32.848 13:33:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:13:32.848 13:33:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:32.848 13:33:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62980 00:13:32.848 13:33:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:32.848 13:33:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:32.848 killing process with pid 62980 00:13:32.848 13:33:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62980' 00:13:32.848 13:33:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62980 00:13:32.848 [2024-11-20 13:33:32.207452] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:32.848 [2024-11-20 13:33:32.207542] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:32.848 [2024-11-20 13:33:32.207590] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:32.848 13:33:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62980 00:13:32.848 [2024-11-20 13:33:32.207610] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:13:33.107 [2024-11-20 13:33:32.417089] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:34.486 13:33:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:34.486 00:13:34.486 real 0m5.721s 00:13:34.487 user 0m8.611s 00:13:34.487 sys 0m1.036s 00:13:34.487 13:33:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:34.487 13:33:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.487 ************************************ 00:13:34.487 END TEST raid_superblock_test 00:13:34.487 ************************************ 00:13:34.487 13:33:33 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:13:34.487 13:33:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:34.487 13:33:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:34.487 13:33:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:34.487 ************************************ 00:13:34.487 START TEST raid_read_error_test 00:13:34.487 ************************************ 00:13:34.487 13:33:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:13:34.487 13:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:34.487 13:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:13:34.487 13:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:34.487 13:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:34.487 13:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:34.487 13:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:34.487 13:33:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:34.487 13:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:34.487 13:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:34.487 13:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:34.487 13:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:34.487 13:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:34.487 13:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:34.487 13:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:34.487 13:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:34.487 13:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:34.487 13:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:34.487 13:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:34.487 13:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:34.487 13:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:34.487 13:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:34.487 13:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.m6PniipfNS 00:13:34.487 13:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63309 00:13:34.487 13:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:34.487 13:33:33 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@811 -- # waitforlisten 63309 00:13:34.487 13:33:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63309 ']' 00:13:34.487 13:33:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.487 13:33:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:34.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.487 13:33:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.487 13:33:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:34.487 13:33:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.487 [2024-11-20 13:33:33.722547] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:13:34.487 [2024-11-20 13:33:33.722680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63309 ] 00:13:34.487 [2024-11-20 13:33:33.902696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.746 [2024-11-20 13:33:34.022939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.004 [2024-11-20 13:33:34.236427] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:35.004 [2024-11-20 13:33:34.236475] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:35.263 13:33:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:35.263 13:33:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:35.263 13:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:35.263 13:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:35.263 13:33:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.263 13:33:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.263 BaseBdev1_malloc 00:13:35.263 13:33:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.263 13:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:35.263 13:33:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.263 13:33:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.263 true 00:13:35.263 13:33:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:35.263 13:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:35.263 13:33:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.263 13:33:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.263 [2024-11-20 13:33:34.610618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:35.263 [2024-11-20 13:33:34.610679] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.263 [2024-11-20 13:33:34.610701] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:35.263 [2024-11-20 13:33:34.610715] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.263 [2024-11-20 13:33:34.613058] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.263 [2024-11-20 13:33:34.613122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:35.263 BaseBdev1 00:13:35.263 13:33:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.263 13:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:35.263 13:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:35.263 13:33:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.264 13:33:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.264 BaseBdev2_malloc 00:13:35.264 13:33:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.264 13:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:35.264 13:33:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.264 13:33:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.264 true 00:13:35.264 13:33:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.264 13:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:35.264 13:33:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.264 13:33:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.264 [2024-11-20 13:33:34.675911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:35.264 [2024-11-20 13:33:34.675982] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.264 [2024-11-20 13:33:34.676014] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:35.264 [2024-11-20 13:33:34.676038] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.264 [2024-11-20 13:33:34.678587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.264 [2024-11-20 13:33:34.678634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:35.264 BaseBdev2 00:13:35.264 13:33:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.264 13:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:13:35.264 13:33:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.264 13:33:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.264 [2024-11-20 13:33:34.687952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:35.264 
[2024-11-20 13:33:34.689996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:35.264 [2024-11-20 13:33:34.690216] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:35.264 [2024-11-20 13:33:34.690240] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:35.264 [2024-11-20 13:33:34.690508] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:35.264 [2024-11-20 13:33:34.690695] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:35.264 [2024-11-20 13:33:34.690708] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:35.264 [2024-11-20 13:33:34.690863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.264 13:33:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.264 13:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:35.264 13:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.264 13:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.264 13:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.264 13:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.264 13:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:35.264 13:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.264 13:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.264 13:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:35.264 13:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.264 13:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.264 13:33:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.264 13:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.264 13:33:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.264 13:33:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.264 13:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.264 "name": "raid_bdev1", 00:13:35.264 "uuid": "e924bf5e-7331-4210-a927-82dbd5458611", 00:13:35.264 "strip_size_kb": 0, 00:13:35.264 "state": "online", 00:13:35.264 "raid_level": "raid1", 00:13:35.264 "superblock": true, 00:13:35.264 "num_base_bdevs": 2, 00:13:35.264 "num_base_bdevs_discovered": 2, 00:13:35.264 "num_base_bdevs_operational": 2, 00:13:35.264 "base_bdevs_list": [ 00:13:35.264 { 00:13:35.264 "name": "BaseBdev1", 00:13:35.264 "uuid": "4fd76f17-5ab5-52bd-b645-4c945faa0218", 00:13:35.264 "is_configured": true, 00:13:35.264 "data_offset": 2048, 00:13:35.264 "data_size": 63488 00:13:35.264 }, 00:13:35.264 { 00:13:35.264 "name": "BaseBdev2", 00:13:35.264 "uuid": "5e6b27ff-d662-5801-977f-cdd9b4a57938", 00:13:35.264 "is_configured": true, 00:13:35.264 "data_offset": 2048, 00:13:35.264 "data_size": 63488 00:13:35.264 } 00:13:35.264 ] 00:13:35.264 }' 00:13:35.264 13:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.264 13:33:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.833 13:33:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 
00:13:35.833 13:33:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:35.833 [2024-11-20 13:33:35.152725] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:36.793 13:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:36.793 13:33:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.793 13:33:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.793 13:33:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.793 13:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:36.793 13:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:36.793 13:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:13:36.794 13:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:13:36.794 13:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:36.794 13:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.794 13:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.794 13:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:36.794 13:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:36.794 13:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:36.794 13:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.794 13:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:13:36.794 13:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.794 13:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.794 13:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.794 13:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.794 13:33:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.794 13:33:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.794 13:33:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.794 13:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.794 "name": "raid_bdev1", 00:13:36.794 "uuid": "e924bf5e-7331-4210-a927-82dbd5458611", 00:13:36.794 "strip_size_kb": 0, 00:13:36.794 "state": "online", 00:13:36.794 "raid_level": "raid1", 00:13:36.794 "superblock": true, 00:13:36.794 "num_base_bdevs": 2, 00:13:36.794 "num_base_bdevs_discovered": 2, 00:13:36.794 "num_base_bdevs_operational": 2, 00:13:36.794 "base_bdevs_list": [ 00:13:36.794 { 00:13:36.794 "name": "BaseBdev1", 00:13:36.794 "uuid": "4fd76f17-5ab5-52bd-b645-4c945faa0218", 00:13:36.794 "is_configured": true, 00:13:36.794 "data_offset": 2048, 00:13:36.794 "data_size": 63488 00:13:36.794 }, 00:13:36.794 { 00:13:36.794 "name": "BaseBdev2", 00:13:36.794 "uuid": "5e6b27ff-d662-5801-977f-cdd9b4a57938", 00:13:36.794 "is_configured": true, 00:13:36.794 "data_offset": 2048, 00:13:36.794 "data_size": 63488 00:13:36.794 } 00:13:36.794 ] 00:13:36.794 }' 00:13:36.794 13:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.794 13:33:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.054 13:33:36 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:37.054 13:33:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.054 13:33:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.054 [2024-11-20 13:33:36.453801] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:37.054 [2024-11-20 13:33:36.453845] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:37.054 [2024-11-20 13:33:36.456559] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:37.054 [2024-11-20 13:33:36.456611] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:37.054 [2024-11-20 13:33:36.456690] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:37.054 [2024-11-20 13:33:36.456705] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:37.054 { 00:13:37.054 "results": [ 00:13:37.054 { 00:13:37.054 "job": "raid_bdev1", 00:13:37.054 "core_mask": "0x1", 00:13:37.054 "workload": "randrw", 00:13:37.054 "percentage": 50, 00:13:37.054 "status": "finished", 00:13:37.054 "queue_depth": 1, 00:13:37.054 "io_size": 131072, 00:13:37.054 "runtime": 1.301213, 00:13:37.054 "iops": 19640.904294685035, 00:13:37.054 "mibps": 2455.1130368356294, 00:13:37.054 "io_failed": 0, 00:13:37.054 "io_timeout": 0, 00:13:37.054 "avg_latency_us": 48.28415047677505, 00:13:37.054 "min_latency_us": 24.057831325301205, 00:13:37.054 "max_latency_us": 2158.213654618474 00:13:37.054 } 00:13:37.054 ], 00:13:37.054 "core_count": 1 00:13:37.054 } 00:13:37.054 13:33:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.054 13:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63309 00:13:37.054 13:33:36 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 63309 ']' 00:13:37.054 13:33:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63309 00:13:37.054 13:33:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:13:37.054 13:33:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:37.054 13:33:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63309 00:13:37.054 13:33:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:37.054 13:33:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:37.054 killing process with pid 63309 00:13:37.054 13:33:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63309' 00:13:37.054 13:33:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63309 00:13:37.054 [2024-11-20 13:33:36.504476] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:37.054 13:33:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63309 00:13:37.312 [2024-11-20 13:33:36.639375] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:38.688 13:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.m6PniipfNS 00:13:38.688 13:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:38.688 13:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:38.688 13:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:13:38.688 13:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:38.688 13:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:38.688 13:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 
-- # return 0 00:13:38.688 13:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:38.688 00:13:38.688 real 0m4.210s 00:13:38.688 user 0m4.927s 00:13:38.688 sys 0m0.543s 00:13:38.688 13:33:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:38.688 13:33:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.688 ************************************ 00:13:38.688 END TEST raid_read_error_test 00:13:38.688 ************************************ 00:13:38.688 13:33:37 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:13:38.688 13:33:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:38.688 13:33:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:38.688 13:33:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:38.688 ************************************ 00:13:38.688 START TEST raid_write_error_test 00:13:38.688 ************************************ 00:13:38.688 13:33:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:13:38.688 13:33:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:38.688 13:33:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:13:38.688 13:33:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:38.688 13:33:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:38.688 13:33:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:38.688 13:33:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:38.688 13:33:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:38.688 13:33:37 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:38.688 13:33:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:38.688 13:33:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:38.688 13:33:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:38.688 13:33:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:38.688 13:33:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:38.688 13:33:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:38.688 13:33:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:38.688 13:33:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:38.688 13:33:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:38.688 13:33:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:38.688 13:33:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:38.688 13:33:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:38.688 13:33:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:38.688 13:33:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LurTS5686p 00:13:38.688 13:33:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63449 00:13:38.688 13:33:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63449 00:13:38.688 13:33:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:38.688 13:33:37 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@835 -- # '[' -z 63449 ']' 00:13:38.688 13:33:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.688 13:33:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:38.688 13:33:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.688 13:33:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:38.688 13:33:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.688 [2024-11-20 13:33:38.015799] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:13:38.689 [2024-11-20 13:33:38.015919] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63449 ] 00:13:38.947 [2024-11-20 13:33:38.195179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.947 [2024-11-20 13:33:38.314182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.205 [2024-11-20 13:33:38.524245] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:39.205 [2024-11-20 13:33:38.524297] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:39.463 13:33:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:39.463 13:33:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:39.463 13:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:39.463 13:33:38 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:39.463 13:33:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.464 13:33:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.464 BaseBdev1_malloc 00:13:39.464 13:33:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.464 13:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:39.464 13:33:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.464 13:33:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.464 true 00:13:39.464 13:33:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.464 13:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:39.464 13:33:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.464 13:33:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.464 [2024-11-20 13:33:38.944900] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:39.464 [2024-11-20 13:33:38.944982] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.464 [2024-11-20 13:33:38.945008] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:39.464 [2024-11-20 13:33:38.945035] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.464 [2024-11-20 13:33:38.947669] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.464 [2024-11-20 13:33:38.947732] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev1 00:13:39.723 BaseBdev1 00:13:39.723 13:33:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.723 13:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:39.723 13:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:39.723 13:33:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.723 13:33:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.723 BaseBdev2_malloc 00:13:39.723 13:33:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.723 13:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:39.723 13:33:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.723 13:33:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.723 true 00:13:39.723 13:33:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.723 13:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:39.723 13:33:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.723 13:33:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.723 [2024-11-20 13:33:39.015923] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:39.723 [2024-11-20 13:33:39.015987] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.723 [2024-11-20 13:33:39.016010] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:39.723 [2024-11-20 13:33:39.016025] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.723 [2024-11-20 13:33:39.018701] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.723 [2024-11-20 13:33:39.018750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:39.723 BaseBdev2 00:13:39.723 13:33:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.723 13:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:13:39.723 13:33:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.723 13:33:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.723 [2024-11-20 13:33:39.027969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:39.723 [2024-11-20 13:33:39.030317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:39.723 [2024-11-20 13:33:39.030536] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:39.723 [2024-11-20 13:33:39.030556] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:39.723 [2024-11-20 13:33:39.030826] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:39.723 [2024-11-20 13:33:39.031002] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:39.723 [2024-11-20 13:33:39.031013] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:39.723 [2024-11-20 13:33:39.031209] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:39.723 13:33:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.723 13:33:39 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:39.723 13:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:39.723 13:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:39.723 13:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.723 13:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.723 13:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:39.723 13:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.723 13:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.723 13:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.723 13:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.723 13:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.723 13:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.723 13:33:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.723 13:33:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.723 13:33:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.723 13:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.723 "name": "raid_bdev1", 00:13:39.723 "uuid": "2a1c4b63-a99a-4360-a82e-151b848ec885", 00:13:39.723 "strip_size_kb": 0, 00:13:39.723 "state": "online", 00:13:39.723 "raid_level": "raid1", 00:13:39.723 "superblock": true, 00:13:39.723 "num_base_bdevs": 2, 00:13:39.723 
"num_base_bdevs_discovered": 2, 00:13:39.723 "num_base_bdevs_operational": 2, 00:13:39.723 "base_bdevs_list": [ 00:13:39.723 { 00:13:39.723 "name": "BaseBdev1", 00:13:39.723 "uuid": "13530cbe-f653-5547-9a65-9e6d40b2ef44", 00:13:39.723 "is_configured": true, 00:13:39.723 "data_offset": 2048, 00:13:39.723 "data_size": 63488 00:13:39.723 }, 00:13:39.723 { 00:13:39.723 "name": "BaseBdev2", 00:13:39.723 "uuid": "a19fba8a-a73a-54a3-a5bc-49205695fe74", 00:13:39.723 "is_configured": true, 00:13:39.723 "data_offset": 2048, 00:13:39.723 "data_size": 63488 00:13:39.723 } 00:13:39.723 ] 00:13:39.723 }' 00:13:39.723 13:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.724 13:33:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.983 13:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:39.983 13:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:40.242 [2024-11-20 13:33:39.552533] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:41.178 13:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:41.178 13:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.178 13:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.178 [2024-11-20 13:33:40.464096] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:13:41.178 [2024-11-20 13:33:40.464159] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:41.178 [2024-11-20 13:33:40.464353] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:13:41.178 13:33:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.178 13:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:41.178 13:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:41.178 13:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:13:41.178 13:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:13:41.178 13:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:41.178 13:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.178 13:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.178 13:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.178 13:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.178 13:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:41.179 13:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.179 13:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.179 13:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.179 13:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.179 13:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.179 13:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.179 13:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.179 13:33:40 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.179 13:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.179 13:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.179 "name": "raid_bdev1", 00:13:41.179 "uuid": "2a1c4b63-a99a-4360-a82e-151b848ec885", 00:13:41.179 "strip_size_kb": 0, 00:13:41.179 "state": "online", 00:13:41.179 "raid_level": "raid1", 00:13:41.179 "superblock": true, 00:13:41.179 "num_base_bdevs": 2, 00:13:41.179 "num_base_bdevs_discovered": 1, 00:13:41.179 "num_base_bdevs_operational": 1, 00:13:41.179 "base_bdevs_list": [ 00:13:41.179 { 00:13:41.179 "name": null, 00:13:41.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.179 "is_configured": false, 00:13:41.179 "data_offset": 0, 00:13:41.179 "data_size": 63488 00:13:41.179 }, 00:13:41.179 { 00:13:41.179 "name": "BaseBdev2", 00:13:41.179 "uuid": "a19fba8a-a73a-54a3-a5bc-49205695fe74", 00:13:41.179 "is_configured": true, 00:13:41.179 "data_offset": 2048, 00:13:41.179 "data_size": 63488 00:13:41.179 } 00:13:41.179 ] 00:13:41.179 }' 00:13:41.179 13:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.179 13:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.439 13:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:41.439 13:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.439 13:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.439 [2024-11-20 13:33:40.869076] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:41.439 [2024-11-20 13:33:40.869107] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:41.439 [2024-11-20 13:33:40.871675] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:13:41.439 [2024-11-20 13:33:40.871718] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.439 [2024-11-20 13:33:40.871777] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:41.439 [2024-11-20 13:33:40.871791] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:41.439 { 00:13:41.439 "results": [ 00:13:41.439 { 00:13:41.439 "job": "raid_bdev1", 00:13:41.439 "core_mask": "0x1", 00:13:41.439 "workload": "randrw", 00:13:41.439 "percentage": 50, 00:13:41.439 "status": "finished", 00:13:41.439 "queue_depth": 1, 00:13:41.439 "io_size": 131072, 00:13:41.439 "runtime": 1.316507, 00:13:41.439 "iops": 21667.184450975194, 00:13:41.439 "mibps": 2708.398056371899, 00:13:41.439 "io_failed": 0, 00:13:41.439 "io_timeout": 0, 00:13:41.439 "avg_latency_us": 43.49801046781341, 00:13:41.439 "min_latency_us": 23.235341365461846, 00:13:41.439 "max_latency_us": 1434.4224899598394 00:13:41.439 } 00:13:41.439 ], 00:13:41.439 "core_count": 1 00:13:41.439 } 00:13:41.439 13:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.439 13:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63449 00:13:41.439 13:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63449 ']' 00:13:41.439 13:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63449 00:13:41.439 13:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:13:41.439 13:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:41.439 13:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63449 00:13:41.439 killing process with pid 63449 00:13:41.439 13:33:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:41.439 13:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:41.439 13:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63449' 00:13:41.439 13:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63449 00:13:41.439 [2024-11-20 13:33:40.913020] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:41.439 13:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63449 00:13:41.698 [2024-11-20 13:33:41.052696] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:43.071 13:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LurTS5686p 00:13:43.071 13:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:43.071 13:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:43.071 13:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:13:43.071 13:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:43.071 13:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:43.071 13:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:43.071 13:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:43.071 00:13:43.071 real 0m4.348s 00:13:43.071 user 0m5.131s 00:13:43.071 sys 0m0.615s 00:13:43.071 13:33:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:43.071 ************************************ 00:13:43.071 END TEST raid_write_error_test 00:13:43.071 ************************************ 00:13:43.071 13:33:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.071 
13:33:42 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:13:43.071 13:33:42 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:43.072 13:33:42 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:13:43.072 13:33:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:43.072 13:33:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:43.072 13:33:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:43.072 ************************************ 00:13:43.072 START TEST raid_state_function_test 00:13:43.072 ************************************ 00:13:43.072 13:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:13:43.072 13:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:13:43.072 13:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:43.072 13:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:43.072 13:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:43.072 13:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:43.072 13:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:43.072 13:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:43.072 13:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:43.072 13:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:43.072 13:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:43.072 13:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:43.072 
13:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:43.072 13:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:43.072 13:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:43.072 13:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:43.072 13:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:43.072 13:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:43.072 13:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:43.072 13:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:43.072 13:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:43.072 13:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:43.072 13:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:13:43.072 13:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:43.072 13:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:43.072 13:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:43.072 13:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:43.072 13:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63588 00:13:43.072 13:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:43.072 Process raid pid: 63588 00:13:43.072 13:33:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63588' 00:13:43.072 13:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63588 00:13:43.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.072 13:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63588 ']' 00:13:43.072 13:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.072 13:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:43.072 13:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.072 13:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:43.072 13:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.072 [2024-11-20 13:33:42.434521] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:13:43.072 [2024-11-20 13:33:42.434842] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.329 [2024-11-20 13:33:42.619547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.329 [2024-11-20 13:33:42.736096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.587 [2024-11-20 13:33:42.943305] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:43.587 [2024-11-20 13:33:42.943519] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:43.846 13:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:43.846 13:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:43.846 13:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:43.846 13:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.846 13:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.846 [2024-11-20 13:33:43.268176] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:43.846 [2024-11-20 13:33:43.268742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:43.846 [2024-11-20 13:33:43.268773] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:43.846 [2024-11-20 13:33:43.268795] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:43.846 [2024-11-20 13:33:43.268805] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:13:43.846 [2024-11-20 13:33:43.268820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:43.846 13:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.846 13:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:43.846 13:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:43.846 13:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:43.846 13:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:43.846 13:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:43.846 13:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:43.846 13:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.846 13:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.846 13:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.846 13:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.846 13:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.846 13:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:43.846 13:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.846 13:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.846 13:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.846 13:33:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.846 "name": "Existed_Raid", 00:13:43.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.846 "strip_size_kb": 64, 00:13:43.846 "state": "configuring", 00:13:43.846 "raid_level": "raid0", 00:13:43.846 "superblock": false, 00:13:43.846 "num_base_bdevs": 3, 00:13:43.846 "num_base_bdevs_discovered": 0, 00:13:43.846 "num_base_bdevs_operational": 3, 00:13:43.846 "base_bdevs_list": [ 00:13:43.846 { 00:13:43.846 "name": "BaseBdev1", 00:13:43.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.846 "is_configured": false, 00:13:43.846 "data_offset": 0, 00:13:43.846 "data_size": 0 00:13:43.846 }, 00:13:43.846 { 00:13:43.846 "name": "BaseBdev2", 00:13:43.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.846 "is_configured": false, 00:13:43.846 "data_offset": 0, 00:13:43.846 "data_size": 0 00:13:43.846 }, 00:13:43.846 { 00:13:43.846 "name": "BaseBdev3", 00:13:43.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.846 "is_configured": false, 00:13:43.846 "data_offset": 0, 00:13:43.846 "data_size": 0 00:13:43.846 } 00:13:43.846 ] 00:13:43.846 }' 00:13:43.846 13:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.846 13:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.412 13:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:44.412 13:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.412 13:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.412 [2024-11-20 13:33:43.687452] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:44.412 [2024-11-20 13:33:43.687491] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 
00:13:44.412 13:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.413 13:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:44.413 13:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.413 13:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.413 [2024-11-20 13:33:43.699422] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:44.413 [2024-11-20 13:33:43.699473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:44.413 [2024-11-20 13:33:43.699484] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:44.413 [2024-11-20 13:33:43.699497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:44.413 [2024-11-20 13:33:43.699505] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:44.413 [2024-11-20 13:33:43.699517] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:44.413 13:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.413 13:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:44.413 13:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.413 13:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.413 [2024-11-20 13:33:43.749611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:44.413 BaseBdev1 00:13:44.413 13:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:44.413 13:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:44.413 13:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:44.413 13:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:44.413 13:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:44.413 13:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:44.413 13:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:44.413 13:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:44.413 13:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.413 13:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.413 13:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.413 13:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:44.413 13:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.413 13:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.413 [ 00:13:44.413 { 00:13:44.413 "name": "BaseBdev1", 00:13:44.413 "aliases": [ 00:13:44.413 "bf8f8bde-834b-4033-8309-12895f69b2cc" 00:13:44.413 ], 00:13:44.413 "product_name": "Malloc disk", 00:13:44.413 "block_size": 512, 00:13:44.413 "num_blocks": 65536, 00:13:44.413 "uuid": "bf8f8bde-834b-4033-8309-12895f69b2cc", 00:13:44.413 "assigned_rate_limits": { 00:13:44.413 "rw_ios_per_sec": 0, 00:13:44.413 "rw_mbytes_per_sec": 0, 00:13:44.413 "r_mbytes_per_sec": 0, 00:13:44.413 "w_mbytes_per_sec": 0 00:13:44.413 }, 
00:13:44.413 "claimed": true, 00:13:44.413 "claim_type": "exclusive_write", 00:13:44.413 "zoned": false, 00:13:44.413 "supported_io_types": { 00:13:44.413 "read": true, 00:13:44.413 "write": true, 00:13:44.413 "unmap": true, 00:13:44.413 "flush": true, 00:13:44.413 "reset": true, 00:13:44.413 "nvme_admin": false, 00:13:44.413 "nvme_io": false, 00:13:44.413 "nvme_io_md": false, 00:13:44.413 "write_zeroes": true, 00:13:44.413 "zcopy": true, 00:13:44.413 "get_zone_info": false, 00:13:44.413 "zone_management": false, 00:13:44.413 "zone_append": false, 00:13:44.413 "compare": false, 00:13:44.413 "compare_and_write": false, 00:13:44.413 "abort": true, 00:13:44.413 "seek_hole": false, 00:13:44.413 "seek_data": false, 00:13:44.413 "copy": true, 00:13:44.413 "nvme_iov_md": false 00:13:44.413 }, 00:13:44.413 "memory_domains": [ 00:13:44.413 { 00:13:44.413 "dma_device_id": "system", 00:13:44.413 "dma_device_type": 1 00:13:44.413 }, 00:13:44.413 { 00:13:44.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.413 "dma_device_type": 2 00:13:44.413 } 00:13:44.413 ], 00:13:44.413 "driver_specific": {} 00:13:44.413 } 00:13:44.413 ] 00:13:44.413 13:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.413 13:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:44.413 13:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:44.413 13:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:44.413 13:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:44.413 13:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:44.413 13:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:44.413 13:33:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:44.413 13:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.413 13:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.413 13:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.413 13:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.413 13:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.413 13:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.413 13:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.413 13:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.413 13:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.413 13:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.413 "name": "Existed_Raid", 00:13:44.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.413 "strip_size_kb": 64, 00:13:44.413 "state": "configuring", 00:13:44.413 "raid_level": "raid0", 00:13:44.413 "superblock": false, 00:13:44.413 "num_base_bdevs": 3, 00:13:44.413 "num_base_bdevs_discovered": 1, 00:13:44.413 "num_base_bdevs_operational": 3, 00:13:44.413 "base_bdevs_list": [ 00:13:44.413 { 00:13:44.413 "name": "BaseBdev1", 00:13:44.413 "uuid": "bf8f8bde-834b-4033-8309-12895f69b2cc", 00:13:44.413 "is_configured": true, 00:13:44.413 "data_offset": 0, 00:13:44.413 "data_size": 65536 00:13:44.413 }, 00:13:44.413 { 00:13:44.413 "name": "BaseBdev2", 00:13:44.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.413 "is_configured": false, 00:13:44.413 
"data_offset": 0, 00:13:44.413 "data_size": 0 00:13:44.413 }, 00:13:44.413 { 00:13:44.413 "name": "BaseBdev3", 00:13:44.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.413 "is_configured": false, 00:13:44.413 "data_offset": 0, 00:13:44.413 "data_size": 0 00:13:44.413 } 00:13:44.413 ] 00:13:44.413 }' 00:13:44.413 13:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.413 13:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.981 13:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:44.981 13:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.981 13:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.981 [2024-11-20 13:33:44.185108] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:44.981 [2024-11-20 13:33:44.185168] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:44.981 13:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.981 13:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:44.981 13:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.981 13:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.981 [2024-11-20 13:33:44.197161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:44.981 [2024-11-20 13:33:44.199494] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:44.981 [2024-11-20 13:33:44.199547] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:13:44.981 [2024-11-20 13:33:44.199560] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:44.981 [2024-11-20 13:33:44.199574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:44.981 13:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.981 13:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:44.981 13:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:44.981 13:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:44.981 13:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:44.981 13:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:44.981 13:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:44.981 13:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:44.981 13:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:44.981 13:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.981 13:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.981 13:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.981 13:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.981 13:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.981 13:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:13:44.981 13:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.981 13:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.981 13:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.981 13:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.981 "name": "Existed_Raid", 00:13:44.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.981 "strip_size_kb": 64, 00:13:44.981 "state": "configuring", 00:13:44.981 "raid_level": "raid0", 00:13:44.981 "superblock": false, 00:13:44.981 "num_base_bdevs": 3, 00:13:44.981 "num_base_bdevs_discovered": 1, 00:13:44.981 "num_base_bdevs_operational": 3, 00:13:44.981 "base_bdevs_list": [ 00:13:44.981 { 00:13:44.981 "name": "BaseBdev1", 00:13:44.981 "uuid": "bf8f8bde-834b-4033-8309-12895f69b2cc", 00:13:44.981 "is_configured": true, 00:13:44.981 "data_offset": 0, 00:13:44.981 "data_size": 65536 00:13:44.981 }, 00:13:44.981 { 00:13:44.981 "name": "BaseBdev2", 00:13:44.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.981 "is_configured": false, 00:13:44.981 "data_offset": 0, 00:13:44.981 "data_size": 0 00:13:44.981 }, 00:13:44.981 { 00:13:44.981 "name": "BaseBdev3", 00:13:44.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.981 "is_configured": false, 00:13:44.981 "data_offset": 0, 00:13:44.981 "data_size": 0 00:13:44.981 } 00:13:44.981 ] 00:13:44.981 }' 00:13:44.981 13:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.981 13:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.241 13:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:45.241 13:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:45.241 13:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.241 [2024-11-20 13:33:44.680711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:45.241 BaseBdev2 00:13:45.241 13:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.241 13:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:45.241 13:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:45.241 13:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:45.241 13:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:45.241 13:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:45.241 13:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:45.241 13:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:45.241 13:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.241 13:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.241 13:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.241 13:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:45.241 13:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.241 13:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.241 [ 00:13:45.241 { 00:13:45.241 "name": "BaseBdev2", 00:13:45.241 "aliases": [ 00:13:45.241 "c3e0bcb8-0f7c-49b9-b606-5ecda556eb7d" 00:13:45.241 ], 00:13:45.241 
"product_name": "Malloc disk", 00:13:45.241 "block_size": 512, 00:13:45.241 "num_blocks": 65536, 00:13:45.241 "uuid": "c3e0bcb8-0f7c-49b9-b606-5ecda556eb7d", 00:13:45.241 "assigned_rate_limits": { 00:13:45.241 "rw_ios_per_sec": 0, 00:13:45.241 "rw_mbytes_per_sec": 0, 00:13:45.241 "r_mbytes_per_sec": 0, 00:13:45.241 "w_mbytes_per_sec": 0 00:13:45.241 }, 00:13:45.241 "claimed": true, 00:13:45.241 "claim_type": "exclusive_write", 00:13:45.241 "zoned": false, 00:13:45.241 "supported_io_types": { 00:13:45.241 "read": true, 00:13:45.241 "write": true, 00:13:45.241 "unmap": true, 00:13:45.241 "flush": true, 00:13:45.241 "reset": true, 00:13:45.241 "nvme_admin": false, 00:13:45.241 "nvme_io": false, 00:13:45.241 "nvme_io_md": false, 00:13:45.241 "write_zeroes": true, 00:13:45.241 "zcopy": true, 00:13:45.241 "get_zone_info": false, 00:13:45.241 "zone_management": false, 00:13:45.241 "zone_append": false, 00:13:45.241 "compare": false, 00:13:45.241 "compare_and_write": false, 00:13:45.241 "abort": true, 00:13:45.241 "seek_hole": false, 00:13:45.241 "seek_data": false, 00:13:45.241 "copy": true, 00:13:45.241 "nvme_iov_md": false 00:13:45.241 }, 00:13:45.241 "memory_domains": [ 00:13:45.241 { 00:13:45.241 "dma_device_id": "system", 00:13:45.241 "dma_device_type": 1 00:13:45.241 }, 00:13:45.241 { 00:13:45.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.241 "dma_device_type": 2 00:13:45.500 } 00:13:45.500 ], 00:13:45.500 "driver_specific": {} 00:13:45.500 } 00:13:45.500 ] 00:13:45.501 13:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.501 13:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:45.501 13:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:45.501 13:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:45.501 13:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:45.501 13:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:45.501 13:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:45.501 13:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:45.501 13:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:45.501 13:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:45.501 13:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.501 13:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.501 13:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.501 13:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.501 13:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.501 13:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.501 13:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.501 13:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.501 13:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.501 13:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.501 "name": "Existed_Raid", 00:13:45.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.501 "strip_size_kb": 64, 00:13:45.501 "state": "configuring", 00:13:45.501 "raid_level": "raid0", 00:13:45.501 "superblock": false, 00:13:45.501 
"num_base_bdevs": 3, 00:13:45.501 "num_base_bdevs_discovered": 2, 00:13:45.501 "num_base_bdevs_operational": 3, 00:13:45.501 "base_bdevs_list": [ 00:13:45.501 { 00:13:45.501 "name": "BaseBdev1", 00:13:45.501 "uuid": "bf8f8bde-834b-4033-8309-12895f69b2cc", 00:13:45.501 "is_configured": true, 00:13:45.501 "data_offset": 0, 00:13:45.501 "data_size": 65536 00:13:45.501 }, 00:13:45.501 { 00:13:45.501 "name": "BaseBdev2", 00:13:45.501 "uuid": "c3e0bcb8-0f7c-49b9-b606-5ecda556eb7d", 00:13:45.501 "is_configured": true, 00:13:45.501 "data_offset": 0, 00:13:45.501 "data_size": 65536 00:13:45.501 }, 00:13:45.501 { 00:13:45.501 "name": "BaseBdev3", 00:13:45.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.501 "is_configured": false, 00:13:45.501 "data_offset": 0, 00:13:45.501 "data_size": 0 00:13:45.501 } 00:13:45.501 ] 00:13:45.501 }' 00:13:45.501 13:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.501 13:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.761 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:45.761 13:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.761 13:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.761 [2024-11-20 13:33:45.193782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:45.761 [2024-11-20 13:33:45.193831] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:45.761 [2024-11-20 13:33:45.193848] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:45.761 [2024-11-20 13:33:45.194158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:45.761 [2024-11-20 13:33:45.194349] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:13:45.761 [2024-11-20 13:33:45.194361] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:45.761 [2024-11-20 13:33:45.194629] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.761 BaseBdev3 00:13:45.761 13:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.761 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:45.761 13:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:45.761 13:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:45.761 13:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:45.761 13:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:45.761 13:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:45.761 13:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:45.761 13:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.761 13:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.761 13:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.761 13:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:45.761 13:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.761 13:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.761 [ 00:13:45.761 { 00:13:45.761 "name": "BaseBdev3", 00:13:45.761 "aliases": [ 00:13:45.761 
"9f112c5b-2424-439c-a029-1fbe99a03963" 00:13:45.761 ], 00:13:45.761 "product_name": "Malloc disk", 00:13:45.761 "block_size": 512, 00:13:45.761 "num_blocks": 65536, 00:13:45.761 "uuid": "9f112c5b-2424-439c-a029-1fbe99a03963", 00:13:45.761 "assigned_rate_limits": { 00:13:45.761 "rw_ios_per_sec": 0, 00:13:45.761 "rw_mbytes_per_sec": 0, 00:13:45.761 "r_mbytes_per_sec": 0, 00:13:45.761 "w_mbytes_per_sec": 0 00:13:45.761 }, 00:13:45.761 "claimed": true, 00:13:45.761 "claim_type": "exclusive_write", 00:13:45.761 "zoned": false, 00:13:45.761 "supported_io_types": { 00:13:45.761 "read": true, 00:13:45.761 "write": true, 00:13:45.761 "unmap": true, 00:13:45.761 "flush": true, 00:13:45.761 "reset": true, 00:13:45.761 "nvme_admin": false, 00:13:45.761 "nvme_io": false, 00:13:45.761 "nvme_io_md": false, 00:13:45.761 "write_zeroes": true, 00:13:45.761 "zcopy": true, 00:13:45.761 "get_zone_info": false, 00:13:45.761 "zone_management": false, 00:13:45.761 "zone_append": false, 00:13:45.761 "compare": false, 00:13:45.761 "compare_and_write": false, 00:13:45.761 "abort": true, 00:13:45.761 "seek_hole": false, 00:13:45.761 "seek_data": false, 00:13:45.761 "copy": true, 00:13:45.761 "nvme_iov_md": false 00:13:45.761 }, 00:13:45.761 "memory_domains": [ 00:13:45.761 { 00:13:45.761 "dma_device_id": "system", 00:13:45.761 "dma_device_type": 1 00:13:45.761 }, 00:13:45.761 { 00:13:45.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.761 "dma_device_type": 2 00:13:45.761 } 00:13:45.761 ], 00:13:45.761 "driver_specific": {} 00:13:45.761 } 00:13:45.761 ] 00:13:45.761 13:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.761 13:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:45.761 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:45.761 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:45.761 
13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:13:45.761 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:45.761 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.761 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:45.761 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:45.761 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:45.761 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.761 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.761 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.761 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.076 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.076 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.076 13:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.076 13:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.076 13:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.076 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.076 "name": "Existed_Raid", 00:13:46.076 "uuid": "92bf0486-b975-404b-bfdf-43ecfb03e1e8", 00:13:46.076 "strip_size_kb": 64, 00:13:46.076 "state": "online", 00:13:46.076 
"raid_level": "raid0", 00:13:46.076 "superblock": false, 00:13:46.076 "num_base_bdevs": 3, 00:13:46.076 "num_base_bdevs_discovered": 3, 00:13:46.076 "num_base_bdevs_operational": 3, 00:13:46.076 "base_bdevs_list": [ 00:13:46.076 { 00:13:46.076 "name": "BaseBdev1", 00:13:46.076 "uuid": "bf8f8bde-834b-4033-8309-12895f69b2cc", 00:13:46.076 "is_configured": true, 00:13:46.076 "data_offset": 0, 00:13:46.076 "data_size": 65536 00:13:46.076 }, 00:13:46.076 { 00:13:46.076 "name": "BaseBdev2", 00:13:46.076 "uuid": "c3e0bcb8-0f7c-49b9-b606-5ecda556eb7d", 00:13:46.076 "is_configured": true, 00:13:46.076 "data_offset": 0, 00:13:46.076 "data_size": 65536 00:13:46.076 }, 00:13:46.076 { 00:13:46.076 "name": "BaseBdev3", 00:13:46.076 "uuid": "9f112c5b-2424-439c-a029-1fbe99a03963", 00:13:46.076 "is_configured": true, 00:13:46.076 "data_offset": 0, 00:13:46.076 "data_size": 65536 00:13:46.076 } 00:13:46.076 ] 00:13:46.076 }' 00:13:46.076 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.076 13:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.335 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:46.335 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:46.335 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:46.335 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:46.335 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:46.335 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:46.335 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:46.335 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:13:46.335 13:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.335 13:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.335 [2024-11-20 13:33:45.609553] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:46.335 13:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.335 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:46.335 "name": "Existed_Raid", 00:13:46.335 "aliases": [ 00:13:46.335 "92bf0486-b975-404b-bfdf-43ecfb03e1e8" 00:13:46.335 ], 00:13:46.335 "product_name": "Raid Volume", 00:13:46.335 "block_size": 512, 00:13:46.335 "num_blocks": 196608, 00:13:46.335 "uuid": "92bf0486-b975-404b-bfdf-43ecfb03e1e8", 00:13:46.335 "assigned_rate_limits": { 00:13:46.335 "rw_ios_per_sec": 0, 00:13:46.335 "rw_mbytes_per_sec": 0, 00:13:46.335 "r_mbytes_per_sec": 0, 00:13:46.335 "w_mbytes_per_sec": 0 00:13:46.335 }, 00:13:46.335 "claimed": false, 00:13:46.335 "zoned": false, 00:13:46.335 "supported_io_types": { 00:13:46.335 "read": true, 00:13:46.335 "write": true, 00:13:46.335 "unmap": true, 00:13:46.335 "flush": true, 00:13:46.335 "reset": true, 00:13:46.335 "nvme_admin": false, 00:13:46.335 "nvme_io": false, 00:13:46.335 "nvme_io_md": false, 00:13:46.335 "write_zeroes": true, 00:13:46.335 "zcopy": false, 00:13:46.335 "get_zone_info": false, 00:13:46.335 "zone_management": false, 00:13:46.335 "zone_append": false, 00:13:46.335 "compare": false, 00:13:46.335 "compare_and_write": false, 00:13:46.335 "abort": false, 00:13:46.335 "seek_hole": false, 00:13:46.336 "seek_data": false, 00:13:46.336 "copy": false, 00:13:46.336 "nvme_iov_md": false 00:13:46.336 }, 00:13:46.336 "memory_domains": [ 00:13:46.336 { 00:13:46.336 "dma_device_id": "system", 00:13:46.336 "dma_device_type": 1 00:13:46.336 }, 00:13:46.336 { 00:13:46.336 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.336 "dma_device_type": 2 00:13:46.336 }, 00:13:46.336 { 00:13:46.336 "dma_device_id": "system", 00:13:46.336 "dma_device_type": 1 00:13:46.336 }, 00:13:46.336 { 00:13:46.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.336 "dma_device_type": 2 00:13:46.336 }, 00:13:46.336 { 00:13:46.336 "dma_device_id": "system", 00:13:46.336 "dma_device_type": 1 00:13:46.336 }, 00:13:46.336 { 00:13:46.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.336 "dma_device_type": 2 00:13:46.336 } 00:13:46.336 ], 00:13:46.336 "driver_specific": { 00:13:46.336 "raid": { 00:13:46.336 "uuid": "92bf0486-b975-404b-bfdf-43ecfb03e1e8", 00:13:46.336 "strip_size_kb": 64, 00:13:46.336 "state": "online", 00:13:46.336 "raid_level": "raid0", 00:13:46.336 "superblock": false, 00:13:46.336 "num_base_bdevs": 3, 00:13:46.336 "num_base_bdevs_discovered": 3, 00:13:46.336 "num_base_bdevs_operational": 3, 00:13:46.336 "base_bdevs_list": [ 00:13:46.336 { 00:13:46.336 "name": "BaseBdev1", 00:13:46.336 "uuid": "bf8f8bde-834b-4033-8309-12895f69b2cc", 00:13:46.336 "is_configured": true, 00:13:46.336 "data_offset": 0, 00:13:46.336 "data_size": 65536 00:13:46.336 }, 00:13:46.336 { 00:13:46.336 "name": "BaseBdev2", 00:13:46.336 "uuid": "c3e0bcb8-0f7c-49b9-b606-5ecda556eb7d", 00:13:46.336 "is_configured": true, 00:13:46.336 "data_offset": 0, 00:13:46.336 "data_size": 65536 00:13:46.336 }, 00:13:46.336 { 00:13:46.336 "name": "BaseBdev3", 00:13:46.336 "uuid": "9f112c5b-2424-439c-a029-1fbe99a03963", 00:13:46.336 "is_configured": true, 00:13:46.336 "data_offset": 0, 00:13:46.336 "data_size": 65536 00:13:46.336 } 00:13:46.336 ] 00:13:46.336 } 00:13:46.336 } 00:13:46.336 }' 00:13:46.336 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:46.336 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 
00:13:46.336 BaseBdev2 00:13:46.336 BaseBdev3' 00:13:46.336 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:46.336 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:46.336 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:46.336 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:46.336 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:46.336 13:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.336 13:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.336 13:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.336 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:46.336 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:46.336 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:46.336 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:46.336 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:46.336 13:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.336 13:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.336 13:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.336 13:33:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:46.336 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:46.336 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:46.336 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:46.336 13:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.336 13:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.336 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:46.596 13:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.596 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:46.596 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:46.596 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:46.596 13:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.596 13:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.596 [2024-11-20 13:33:45.852984] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:46.596 [2024-11-20 13:33:45.853021] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:46.596 [2024-11-20 13:33:45.853101] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:46.596 13:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.596 13:33:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@260 -- # local expected_state 00:13:46.596 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:13:46.596 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:46.596 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:46.596 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:46.596 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:13:46.596 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:46.596 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:46.596 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:46.596 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:46.596 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:46.596 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.596 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.596 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.596 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.596 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.596 13:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.596 13:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.596 13:33:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.596 13:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.596 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.596 "name": "Existed_Raid", 00:13:46.596 "uuid": "92bf0486-b975-404b-bfdf-43ecfb03e1e8", 00:13:46.596 "strip_size_kb": 64, 00:13:46.596 "state": "offline", 00:13:46.596 "raid_level": "raid0", 00:13:46.596 "superblock": false, 00:13:46.596 "num_base_bdevs": 3, 00:13:46.596 "num_base_bdevs_discovered": 2, 00:13:46.596 "num_base_bdevs_operational": 2, 00:13:46.596 "base_bdevs_list": [ 00:13:46.596 { 00:13:46.596 "name": null, 00:13:46.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.596 "is_configured": false, 00:13:46.596 "data_offset": 0, 00:13:46.596 "data_size": 65536 00:13:46.596 }, 00:13:46.596 { 00:13:46.596 "name": "BaseBdev2", 00:13:46.596 "uuid": "c3e0bcb8-0f7c-49b9-b606-5ecda556eb7d", 00:13:46.596 "is_configured": true, 00:13:46.596 "data_offset": 0, 00:13:46.596 "data_size": 65536 00:13:46.596 }, 00:13:46.596 { 00:13:46.596 "name": "BaseBdev3", 00:13:46.597 "uuid": "9f112c5b-2424-439c-a029-1fbe99a03963", 00:13:46.597 "is_configured": true, 00:13:46.597 "data_offset": 0, 00:13:46.597 "data_size": 65536 00:13:46.597 } 00:13:46.597 ] 00:13:46.597 }' 00:13:46.597 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.597 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.165 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:47.165 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:47.165 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.165 13:33:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.165 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.165 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:47.165 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.165 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:47.165 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:47.165 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:47.165 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.165 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.165 [2024-11-20 13:33:46.440264] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:47.165 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.165 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:47.165 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:47.165 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.165 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.165 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.165 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:47.165 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.165 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:13:47.165 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:47.165 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:47.165 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.165 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.165 [2024-11-20 13:33:46.595686] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:47.165 [2024-11-20 13:33:46.595747] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.422 BaseBdev2 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.422 [ 00:13:47.422 { 00:13:47.422 "name": "BaseBdev2", 00:13:47.422 "aliases": [ 00:13:47.422 "6f63f88c-71a9-4c2f-93a9-1c98a40493fb" 00:13:47.422 ], 00:13:47.422 "product_name": "Malloc disk", 00:13:47.422 "block_size": 512, 00:13:47.422 "num_blocks": 65536, 00:13:47.422 "uuid": "6f63f88c-71a9-4c2f-93a9-1c98a40493fb", 00:13:47.422 "assigned_rate_limits": { 00:13:47.422 "rw_ios_per_sec": 0, 00:13:47.422 "rw_mbytes_per_sec": 0, 00:13:47.422 "r_mbytes_per_sec": 0, 00:13:47.422 "w_mbytes_per_sec": 0 00:13:47.422 }, 00:13:47.422 "claimed": false, 00:13:47.422 "zoned": false, 00:13:47.422 "supported_io_types": { 00:13:47.422 "read": true, 00:13:47.422 "write": true, 00:13:47.422 "unmap": true, 00:13:47.422 "flush": true, 00:13:47.422 "reset": true, 00:13:47.422 "nvme_admin": false, 00:13:47.422 "nvme_io": false, 00:13:47.422 "nvme_io_md": false, 00:13:47.422 "write_zeroes": true, 00:13:47.422 "zcopy": true, 00:13:47.422 "get_zone_info": false, 00:13:47.422 "zone_management": false, 00:13:47.422 "zone_append": false, 00:13:47.422 "compare": false, 00:13:47.422 "compare_and_write": false, 00:13:47.422 "abort": true, 00:13:47.422 "seek_hole": false, 00:13:47.422 "seek_data": false, 00:13:47.422 "copy": true, 00:13:47.422 "nvme_iov_md": false 00:13:47.422 }, 00:13:47.422 "memory_domains": [ 00:13:47.422 { 00:13:47.422 "dma_device_id": "system", 00:13:47.422 "dma_device_type": 1 00:13:47.422 }, 00:13:47.422 { 00:13:47.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.422 "dma_device_type": 2 00:13:47.422 } 00:13:47.422 ], 00:13:47.422 "driver_specific": {} 00:13:47.422 } 00:13:47.422 ] 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.422 BaseBdev3 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:47.422 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.422 
13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.422 [ 00:13:47.422 { 00:13:47.422 "name": "BaseBdev3", 00:13:47.422 "aliases": [ 00:13:47.422 "64c1206b-8a50-4e3a-aaa3-b035d5a9ce14" 00:13:47.422 ], 00:13:47.422 "product_name": "Malloc disk", 00:13:47.422 "block_size": 512, 00:13:47.422 "num_blocks": 65536, 00:13:47.679 "uuid": "64c1206b-8a50-4e3a-aaa3-b035d5a9ce14", 00:13:47.679 "assigned_rate_limits": { 00:13:47.679 "rw_ios_per_sec": 0, 00:13:47.679 "rw_mbytes_per_sec": 0, 00:13:47.679 "r_mbytes_per_sec": 0, 00:13:47.679 "w_mbytes_per_sec": 0 00:13:47.679 }, 00:13:47.679 "claimed": false, 00:13:47.679 "zoned": false, 00:13:47.679 "supported_io_types": { 00:13:47.679 "read": true, 00:13:47.679 "write": true, 00:13:47.679 "unmap": true, 00:13:47.679 "flush": true, 00:13:47.679 "reset": true, 00:13:47.679 "nvme_admin": false, 00:13:47.679 "nvme_io": false, 00:13:47.679 "nvme_io_md": false, 00:13:47.679 "write_zeroes": true, 00:13:47.679 "zcopy": true, 00:13:47.679 "get_zone_info": false, 00:13:47.679 "zone_management": false, 00:13:47.679 "zone_append": false, 00:13:47.679 "compare": false, 00:13:47.679 "compare_and_write": false, 00:13:47.679 "abort": true, 00:13:47.679 "seek_hole": false, 00:13:47.679 "seek_data": false, 00:13:47.679 "copy": true, 00:13:47.679 "nvme_iov_md": false 00:13:47.679 }, 00:13:47.679 "memory_domains": [ 00:13:47.679 { 00:13:47.679 "dma_device_id": "system", 00:13:47.679 "dma_device_type": 1 00:13:47.679 }, 00:13:47.679 { 00:13:47.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.679 "dma_device_type": 2 00:13:47.679 } 00:13:47.679 ], 00:13:47.679 "driver_specific": {} 00:13:47.679 } 00:13:47.679 ] 00:13:47.679 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.679 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:47.679 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 
-- # (( i++ )) 00:13:47.679 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:47.679 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:47.679 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.679 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.679 [2024-11-20 13:33:46.930143] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:47.679 [2024-11-20 13:33:46.930197] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:47.679 [2024-11-20 13:33:46.930225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:47.679 [2024-11-20 13:33:46.932383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:47.679 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.679 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:47.679 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:47.679 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:47.679 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:47.679 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.679 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:47.679 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.679 13:33:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.679 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.679 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.679 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.679 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.679 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.679 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.679 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.679 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.679 "name": "Existed_Raid", 00:13:47.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.679 "strip_size_kb": 64, 00:13:47.679 "state": "configuring", 00:13:47.679 "raid_level": "raid0", 00:13:47.679 "superblock": false, 00:13:47.679 "num_base_bdevs": 3, 00:13:47.679 "num_base_bdevs_discovered": 2, 00:13:47.679 "num_base_bdevs_operational": 3, 00:13:47.679 "base_bdevs_list": [ 00:13:47.679 { 00:13:47.679 "name": "BaseBdev1", 00:13:47.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.679 "is_configured": false, 00:13:47.679 "data_offset": 0, 00:13:47.679 "data_size": 0 00:13:47.679 }, 00:13:47.679 { 00:13:47.679 "name": "BaseBdev2", 00:13:47.679 "uuid": "6f63f88c-71a9-4c2f-93a9-1c98a40493fb", 00:13:47.679 "is_configured": true, 00:13:47.679 "data_offset": 0, 00:13:47.679 "data_size": 65536 00:13:47.679 }, 00:13:47.679 { 00:13:47.679 "name": "BaseBdev3", 00:13:47.679 "uuid": "64c1206b-8a50-4e3a-aaa3-b035d5a9ce14", 00:13:47.679 "is_configured": true, 00:13:47.679 "data_offset": 0, 
00:13:47.679 "data_size": 65536 00:13:47.679 } 00:13:47.679 ] 00:13:47.679 }' 00:13:47.679 13:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.679 13:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.938 13:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:47.938 13:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.938 13:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.938 [2024-11-20 13:33:47.349544] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:47.938 13:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.938 13:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:47.938 13:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:47.938 13:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:47.938 13:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:47.938 13:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.938 13:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:47.938 13:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.938 13:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.938 13:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.938 13:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 
00:13:47.938 13:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.938 13:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.938 13:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.938 13:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.938 13:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.938 13:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.938 "name": "Existed_Raid", 00:13:47.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.938 "strip_size_kb": 64, 00:13:47.938 "state": "configuring", 00:13:47.938 "raid_level": "raid0", 00:13:47.938 "superblock": false, 00:13:47.938 "num_base_bdevs": 3, 00:13:47.938 "num_base_bdevs_discovered": 1, 00:13:47.938 "num_base_bdevs_operational": 3, 00:13:47.938 "base_bdevs_list": [ 00:13:47.938 { 00:13:47.938 "name": "BaseBdev1", 00:13:47.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.938 "is_configured": false, 00:13:47.938 "data_offset": 0, 00:13:47.938 "data_size": 0 00:13:47.938 }, 00:13:47.938 { 00:13:47.938 "name": null, 00:13:47.938 "uuid": "6f63f88c-71a9-4c2f-93a9-1c98a40493fb", 00:13:47.938 "is_configured": false, 00:13:47.938 "data_offset": 0, 00:13:47.938 "data_size": 65536 00:13:47.938 }, 00:13:47.938 { 00:13:47.938 "name": "BaseBdev3", 00:13:47.938 "uuid": "64c1206b-8a50-4e3a-aaa3-b035d5a9ce14", 00:13:47.938 "is_configured": true, 00:13:47.938 "data_offset": 0, 00:13:47.938 "data_size": 65536 00:13:47.938 } 00:13:47.938 ] 00:13:47.938 }' 00:13:47.938 13:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.938 13:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.505 13:33:47 
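Aside: `verify_raid_bdev_state` pipes `rpc_cmd bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "Existed_Raid")'` and then checks fields of the selected object. A Python sketch of that selection, using a pared-down copy of the JSON the log prints after `bdev_raid_remove_base_bdev BaseBdev2` (the select-by-name logic mirrors the jq filter; the trimmed document is an assumption for brevity):

```python
import json

# Sketch of the jq select done by verify_raid_bdev_state, on a pared-down
# copy of the bdev_raid_get_bdevs output shown in the log.
raid_bdevs = json.loads("""[
  {"name": "Existed_Raid", "state": "configuring", "raid_level": "raid0",
   "strip_size_kb": 64, "num_base_bdevs": 3,
   "num_base_bdevs_discovered": 1, "num_base_bdevs_operational": 3}
]""")

info = next(b for b in raid_bdevs if b["name"] == "Existed_Raid")

# After removing BaseBdev2 (and with BaseBdev1 never created), only
# BaseBdev3 is discovered, so the array must stay in "configuring".
assert info["state"] == "configuring"
assert info["num_base_bdevs_discovered"] < info["num_base_bdevs_operational"]
```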
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.505 13:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.505 13:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.505 13:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:48.505 13:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.505 13:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:48.505 13:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:48.505 13:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.505 13:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.505 [2024-11-20 13:33:47.883998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:48.505 BaseBdev1 00:13:48.505 13:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.505 13:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:48.505 13:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:48.505 13:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:48.505 13:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:48.505 13:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:48.505 13:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:48.505 13:33:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:48.505 13:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.505 13:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.505 13:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.506 13:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:48.506 13:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.506 13:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.506 [ 00:13:48.506 { 00:13:48.506 "name": "BaseBdev1", 00:13:48.506 "aliases": [ 00:13:48.506 "76eab5d3-5371-4640-8d6b-946469f9a9af" 00:13:48.506 ], 00:13:48.506 "product_name": "Malloc disk", 00:13:48.506 "block_size": 512, 00:13:48.506 "num_blocks": 65536, 00:13:48.506 "uuid": "76eab5d3-5371-4640-8d6b-946469f9a9af", 00:13:48.506 "assigned_rate_limits": { 00:13:48.506 "rw_ios_per_sec": 0, 00:13:48.506 "rw_mbytes_per_sec": 0, 00:13:48.506 "r_mbytes_per_sec": 0, 00:13:48.506 "w_mbytes_per_sec": 0 00:13:48.506 }, 00:13:48.506 "claimed": true, 00:13:48.506 "claim_type": "exclusive_write", 00:13:48.506 "zoned": false, 00:13:48.506 "supported_io_types": { 00:13:48.506 "read": true, 00:13:48.506 "write": true, 00:13:48.506 "unmap": true, 00:13:48.506 "flush": true, 00:13:48.506 "reset": true, 00:13:48.506 "nvme_admin": false, 00:13:48.506 "nvme_io": false, 00:13:48.506 "nvme_io_md": false, 00:13:48.506 "write_zeroes": true, 00:13:48.506 "zcopy": true, 00:13:48.506 "get_zone_info": false, 00:13:48.506 "zone_management": false, 00:13:48.506 "zone_append": false, 00:13:48.506 "compare": false, 00:13:48.506 "compare_and_write": false, 00:13:48.506 "abort": true, 00:13:48.506 "seek_hole": false, 00:13:48.506 "seek_data": false, 00:13:48.506 
"copy": true, 00:13:48.506 "nvme_iov_md": false 00:13:48.506 }, 00:13:48.506 "memory_domains": [ 00:13:48.506 { 00:13:48.506 "dma_device_id": "system", 00:13:48.506 "dma_device_type": 1 00:13:48.506 }, 00:13:48.506 { 00:13:48.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.506 "dma_device_type": 2 00:13:48.506 } 00:13:48.506 ], 00:13:48.506 "driver_specific": {} 00:13:48.506 } 00:13:48.506 ] 00:13:48.506 13:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.506 13:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:48.506 13:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:48.506 13:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:48.506 13:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:48.506 13:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:48.506 13:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:48.506 13:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:48.506 13:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.506 13:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.506 13:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.506 13:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.506 13:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.506 13:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:48.506 13:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.506 13:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.506 13:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.506 13:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.506 "name": "Existed_Raid", 00:13:48.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.506 "strip_size_kb": 64, 00:13:48.506 "state": "configuring", 00:13:48.506 "raid_level": "raid0", 00:13:48.506 "superblock": false, 00:13:48.506 "num_base_bdevs": 3, 00:13:48.506 "num_base_bdevs_discovered": 2, 00:13:48.506 "num_base_bdevs_operational": 3, 00:13:48.506 "base_bdevs_list": [ 00:13:48.506 { 00:13:48.506 "name": "BaseBdev1", 00:13:48.506 "uuid": "76eab5d3-5371-4640-8d6b-946469f9a9af", 00:13:48.506 "is_configured": true, 00:13:48.506 "data_offset": 0, 00:13:48.506 "data_size": 65536 00:13:48.506 }, 00:13:48.506 { 00:13:48.506 "name": null, 00:13:48.506 "uuid": "6f63f88c-71a9-4c2f-93a9-1c98a40493fb", 00:13:48.506 "is_configured": false, 00:13:48.506 "data_offset": 0, 00:13:48.506 "data_size": 65536 00:13:48.506 }, 00:13:48.506 { 00:13:48.506 "name": "BaseBdev3", 00:13:48.506 "uuid": "64c1206b-8a50-4e3a-aaa3-b035d5a9ce14", 00:13:48.506 "is_configured": true, 00:13:48.506 "data_offset": 0, 00:13:48.506 "data_size": 65536 00:13:48.506 } 00:13:48.506 ] 00:13:48.506 }' 00:13:48.506 13:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.506 13:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.074 13:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.075 13:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.075 13:33:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.075 13:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:49.075 13:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.075 13:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:49.075 13:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:49.075 13:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.075 13:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.075 [2024-11-20 13:33:48.375372] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:49.075 13:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.075 13:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:49.075 13:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:49.075 13:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:49.075 13:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:49.075 13:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:49.075 13:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:49.075 13:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.075 13:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.075 13:33:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.075 13:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.075 13:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.075 13:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.075 13:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.075 13:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.075 13:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.075 13:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.075 "name": "Existed_Raid", 00:13:49.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.075 "strip_size_kb": 64, 00:13:49.075 "state": "configuring", 00:13:49.075 "raid_level": "raid0", 00:13:49.075 "superblock": false, 00:13:49.075 "num_base_bdevs": 3, 00:13:49.075 "num_base_bdevs_discovered": 1, 00:13:49.075 "num_base_bdevs_operational": 3, 00:13:49.075 "base_bdevs_list": [ 00:13:49.075 { 00:13:49.075 "name": "BaseBdev1", 00:13:49.075 "uuid": "76eab5d3-5371-4640-8d6b-946469f9a9af", 00:13:49.075 "is_configured": true, 00:13:49.075 "data_offset": 0, 00:13:49.075 "data_size": 65536 00:13:49.075 }, 00:13:49.075 { 00:13:49.075 "name": null, 00:13:49.075 "uuid": "6f63f88c-71a9-4c2f-93a9-1c98a40493fb", 00:13:49.075 "is_configured": false, 00:13:49.075 "data_offset": 0, 00:13:49.075 "data_size": 65536 00:13:49.075 }, 00:13:49.075 { 00:13:49.075 "name": null, 00:13:49.075 "uuid": "64c1206b-8a50-4e3a-aaa3-b035d5a9ce14", 00:13:49.075 "is_configured": false, 00:13:49.075 "data_offset": 0, 00:13:49.075 "data_size": 65536 00:13:49.075 } 00:13:49.075 ] 00:13:49.075 }' 00:13:49.075 13:33:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.075 13:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.334 13:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.334 13:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:49.334 13:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.334 13:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.334 13:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.334 13:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:49.334 13:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:49.334 13:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.594 13:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.594 [2024-11-20 13:33:48.823217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:49.594 13:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.594 13:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:49.594 13:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:49.594 13:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:49.594 13:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:49.594 13:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:13:49.594 13:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:49.594 13:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.594 13:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.594 13:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.594 13:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.594 13:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.594 13:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.594 13:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.594 13:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.594 13:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.594 13:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.594 "name": "Existed_Raid", 00:13:49.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.594 "strip_size_kb": 64, 00:13:49.594 "state": "configuring", 00:13:49.594 "raid_level": "raid0", 00:13:49.594 "superblock": false, 00:13:49.594 "num_base_bdevs": 3, 00:13:49.594 "num_base_bdevs_discovered": 2, 00:13:49.594 "num_base_bdevs_operational": 3, 00:13:49.594 "base_bdevs_list": [ 00:13:49.594 { 00:13:49.594 "name": "BaseBdev1", 00:13:49.594 "uuid": "76eab5d3-5371-4640-8d6b-946469f9a9af", 00:13:49.594 "is_configured": true, 00:13:49.594 "data_offset": 0, 00:13:49.594 "data_size": 65536 00:13:49.594 }, 00:13:49.594 { 00:13:49.594 "name": null, 00:13:49.594 "uuid": "6f63f88c-71a9-4c2f-93a9-1c98a40493fb", 00:13:49.594 "is_configured": 
false, 00:13:49.594 "data_offset": 0, 00:13:49.594 "data_size": 65536 00:13:49.594 }, 00:13:49.594 { 00:13:49.594 "name": "BaseBdev3", 00:13:49.594 "uuid": "64c1206b-8a50-4e3a-aaa3-b035d5a9ce14", 00:13:49.594 "is_configured": true, 00:13:49.594 "data_offset": 0, 00:13:49.594 "data_size": 65536 00:13:49.594 } 00:13:49.595 ] 00:13:49.595 }' 00:13:49.595 13:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.595 13:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.854 13:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.854 13:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:49.854 13:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.854 13:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.854 13:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.854 13:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:49.854 13:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:49.854 13:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.854 13:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.854 [2024-11-20 13:33:49.267098] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:50.113 13:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.113 13:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:50.113 13:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
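Aside: the positional checks like `jq '.[0].base_bdevs_list[1].is_configured'` in the trace work because a removed (or never-created) base bdev keeps its slot in `base_bdevs_list` with `"name": null` rather than being dropped from the array. A Python sketch of that slot bookkeeping, using the `base_bdevs_list` the log prints after BaseBdev3 is re-added (the counting helper line is illustrative, not part of the test script):

```python
# Sketch: slot state in base_bdevs_list, copied from the RPC output in the
# log after `bdev_raid_add_base_bdev Existed_Raid BaseBdev3`.
base_bdevs_list = [
    {"name": "BaseBdev1", "uuid": "76eab5d3-5371-4640-8d6b-946469f9a9af",
     "is_configured": True},
    {"name": None, "uuid": "6f63f88c-71a9-4c2f-93a9-1c98a40493fb",
     "is_configured": False},   # BaseBdev2 was removed; its slot remains
    {"name": "BaseBdev3", "uuid": "64c1206b-8a50-4e3a-aaa3-b035d5a9ce14",
     "is_configured": True},
]

# num_base_bdevs_discovered counts configured slots, matching the log's 2.
discovered = sum(1 for b in base_bdevs_list if b["is_configured"])
assert discovered == 2
# The removed bdev's slot is preserved positionally with a null name.
assert base_bdevs_list[1]["name"] is None
```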
-- # local raid_bdev_name=Existed_Raid 00:13:50.113 13:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:50.113 13:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:50.113 13:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:50.113 13:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:50.113 13:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.113 13:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.113 13:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.113 13:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.113 13:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.113 13:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.113 13:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.113 13:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.113 13:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.113 13:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.113 "name": "Existed_Raid", 00:13:50.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.113 "strip_size_kb": 64, 00:13:50.113 "state": "configuring", 00:13:50.113 "raid_level": "raid0", 00:13:50.113 "superblock": false, 00:13:50.113 "num_base_bdevs": 3, 00:13:50.113 "num_base_bdevs_discovered": 1, 00:13:50.113 "num_base_bdevs_operational": 3, 00:13:50.113 
"base_bdevs_list": [ 00:13:50.113 { 00:13:50.113 "name": null, 00:13:50.113 "uuid": "76eab5d3-5371-4640-8d6b-946469f9a9af", 00:13:50.113 "is_configured": false, 00:13:50.113 "data_offset": 0, 00:13:50.113 "data_size": 65536 00:13:50.113 }, 00:13:50.113 { 00:13:50.113 "name": null, 00:13:50.113 "uuid": "6f63f88c-71a9-4c2f-93a9-1c98a40493fb", 00:13:50.113 "is_configured": false, 00:13:50.113 "data_offset": 0, 00:13:50.113 "data_size": 65536 00:13:50.113 }, 00:13:50.113 { 00:13:50.113 "name": "BaseBdev3", 00:13:50.113 "uuid": "64c1206b-8a50-4e3a-aaa3-b035d5a9ce14", 00:13:50.113 "is_configured": true, 00:13:50.113 "data_offset": 0, 00:13:50.113 "data_size": 65536 00:13:50.113 } 00:13:50.113 ] 00:13:50.113 }' 00:13:50.113 13:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.113 13:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.373 13:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:50.373 13:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.373 13:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.373 13:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.373 13:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.373 13:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:50.373 13:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:50.373 13:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.373 13:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.373 [2024-11-20 13:33:49.846201] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:50.373 13:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.373 13:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:50.373 13:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:50.373 13:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:50.373 13:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:50.373 13:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:50.373 13:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:50.373 13:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.373 13:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.373 13:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.373 13:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.373 13:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.373 13:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.632 13:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.632 13:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.632 13:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.632 13:33:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.632 "name": "Existed_Raid", 00:13:50.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.632 "strip_size_kb": 64, 00:13:50.632 "state": "configuring", 00:13:50.632 "raid_level": "raid0", 00:13:50.632 "superblock": false, 00:13:50.632 "num_base_bdevs": 3, 00:13:50.632 "num_base_bdevs_discovered": 2, 00:13:50.632 "num_base_bdevs_operational": 3, 00:13:50.632 "base_bdevs_list": [ 00:13:50.632 { 00:13:50.632 "name": null, 00:13:50.632 "uuid": "76eab5d3-5371-4640-8d6b-946469f9a9af", 00:13:50.632 "is_configured": false, 00:13:50.632 "data_offset": 0, 00:13:50.632 "data_size": 65536 00:13:50.632 }, 00:13:50.632 { 00:13:50.632 "name": "BaseBdev2", 00:13:50.632 "uuid": "6f63f88c-71a9-4c2f-93a9-1c98a40493fb", 00:13:50.632 "is_configured": true, 00:13:50.632 "data_offset": 0, 00:13:50.632 "data_size": 65536 00:13:50.632 }, 00:13:50.632 { 00:13:50.632 "name": "BaseBdev3", 00:13:50.632 "uuid": "64c1206b-8a50-4e3a-aaa3-b035d5a9ce14", 00:13:50.632 "is_configured": true, 00:13:50.632 "data_offset": 0, 00:13:50.632 "data_size": 65536 00:13:50.632 } 00:13:50.632 ] 00:13:50.632 }' 00:13:50.632 13:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.632 13:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.944 13:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:50.944 13:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.944 13:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.944 13:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.944 13:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.944 13:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # 
[[ true == \t\r\u\e ]] 00:13:50.944 13:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.944 13:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.944 13:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.944 13:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:50.944 13:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.944 13:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 76eab5d3-5371-4640-8d6b-946469f9a9af 00:13:50.944 13:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.944 13:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.219 [2024-11-20 13:33:50.424231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:51.219 [2024-11-20 13:33:50.424270] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:51.219 [2024-11-20 13:33:50.424282] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:51.219 [2024-11-20 13:33:50.424539] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:51.219 [2024-11-20 13:33:50.424679] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:51.219 [2024-11-20 13:33:50.424689] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:51.219 [2024-11-20 13:33:50.424941] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:51.219 NewBaseBdev 00:13:51.219 13:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:13:51.219 13:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:51.219 13:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:51.219 13:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:51.219 13:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:51.219 13:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:51.220 13:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:51.220 13:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:51.220 13:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.220 13:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.220 13:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.220 13:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:51.220 13:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.220 13:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.220 [ 00:13:51.220 { 00:13:51.220 "name": "NewBaseBdev", 00:13:51.220 "aliases": [ 00:13:51.220 "76eab5d3-5371-4640-8d6b-946469f9a9af" 00:13:51.220 ], 00:13:51.220 "product_name": "Malloc disk", 00:13:51.220 "block_size": 512, 00:13:51.220 "num_blocks": 65536, 00:13:51.220 "uuid": "76eab5d3-5371-4640-8d6b-946469f9a9af", 00:13:51.220 "assigned_rate_limits": { 00:13:51.220 "rw_ios_per_sec": 0, 00:13:51.220 "rw_mbytes_per_sec": 0, 00:13:51.220 "r_mbytes_per_sec": 0, 00:13:51.220 "w_mbytes_per_sec": 0 
00:13:51.220 }, 00:13:51.220 "claimed": true, 00:13:51.220 "claim_type": "exclusive_write", 00:13:51.220 "zoned": false, 00:13:51.220 "supported_io_types": { 00:13:51.220 "read": true, 00:13:51.220 "write": true, 00:13:51.220 "unmap": true, 00:13:51.220 "flush": true, 00:13:51.220 "reset": true, 00:13:51.220 "nvme_admin": false, 00:13:51.220 "nvme_io": false, 00:13:51.220 "nvme_io_md": false, 00:13:51.220 "write_zeroes": true, 00:13:51.220 "zcopy": true, 00:13:51.220 "get_zone_info": false, 00:13:51.220 "zone_management": false, 00:13:51.220 "zone_append": false, 00:13:51.220 "compare": false, 00:13:51.220 "compare_and_write": false, 00:13:51.220 "abort": true, 00:13:51.220 "seek_hole": false, 00:13:51.220 "seek_data": false, 00:13:51.220 "copy": true, 00:13:51.220 "nvme_iov_md": false 00:13:51.220 }, 00:13:51.220 "memory_domains": [ 00:13:51.220 { 00:13:51.220 "dma_device_id": "system", 00:13:51.220 "dma_device_type": 1 00:13:51.220 }, 00:13:51.220 { 00:13:51.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.220 "dma_device_type": 2 00:13:51.220 } 00:13:51.220 ], 00:13:51.220 "driver_specific": {} 00:13:51.220 } 00:13:51.220 ] 00:13:51.220 13:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.220 13:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:51.220 13:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:13:51.220 13:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:51.220 13:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:51.220 13:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:51.220 13:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:51.220 13:33:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:51.220 13:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.220 13:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.220 13:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.220 13:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.220 13:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.220 13:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.220 13:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.220 13:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.220 13:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.220 13:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.220 "name": "Existed_Raid", 00:13:51.220 "uuid": "412e7e69-065f-4df0-9298-8b485cd79a72", 00:13:51.220 "strip_size_kb": 64, 00:13:51.220 "state": "online", 00:13:51.220 "raid_level": "raid0", 00:13:51.220 "superblock": false, 00:13:51.220 "num_base_bdevs": 3, 00:13:51.220 "num_base_bdevs_discovered": 3, 00:13:51.220 "num_base_bdevs_operational": 3, 00:13:51.220 "base_bdevs_list": [ 00:13:51.220 { 00:13:51.220 "name": "NewBaseBdev", 00:13:51.220 "uuid": "76eab5d3-5371-4640-8d6b-946469f9a9af", 00:13:51.220 "is_configured": true, 00:13:51.220 "data_offset": 0, 00:13:51.220 "data_size": 65536 00:13:51.220 }, 00:13:51.220 { 00:13:51.220 "name": "BaseBdev2", 00:13:51.220 "uuid": "6f63f88c-71a9-4c2f-93a9-1c98a40493fb", 00:13:51.220 "is_configured": true, 00:13:51.220 
"data_offset": 0, 00:13:51.220 "data_size": 65536 00:13:51.220 }, 00:13:51.220 { 00:13:51.220 "name": "BaseBdev3", 00:13:51.220 "uuid": "64c1206b-8a50-4e3a-aaa3-b035d5a9ce14", 00:13:51.220 "is_configured": true, 00:13:51.220 "data_offset": 0, 00:13:51.220 "data_size": 65536 00:13:51.220 } 00:13:51.220 ] 00:13:51.220 }' 00:13:51.220 13:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.220 13:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.480 13:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:51.480 13:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:51.480 13:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:51.480 13:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:51.480 13:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:51.480 13:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:51.480 13:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:51.480 13:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.480 13:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.480 13:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:51.480 [2024-11-20 13:33:50.924152] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:51.480 13:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.480 13:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:51.480 "name": 
"Existed_Raid", 00:13:51.480 "aliases": [ 00:13:51.480 "412e7e69-065f-4df0-9298-8b485cd79a72" 00:13:51.480 ], 00:13:51.480 "product_name": "Raid Volume", 00:13:51.480 "block_size": 512, 00:13:51.480 "num_blocks": 196608, 00:13:51.480 "uuid": "412e7e69-065f-4df0-9298-8b485cd79a72", 00:13:51.480 "assigned_rate_limits": { 00:13:51.480 "rw_ios_per_sec": 0, 00:13:51.480 "rw_mbytes_per_sec": 0, 00:13:51.480 "r_mbytes_per_sec": 0, 00:13:51.480 "w_mbytes_per_sec": 0 00:13:51.480 }, 00:13:51.480 "claimed": false, 00:13:51.480 "zoned": false, 00:13:51.480 "supported_io_types": { 00:13:51.480 "read": true, 00:13:51.480 "write": true, 00:13:51.480 "unmap": true, 00:13:51.480 "flush": true, 00:13:51.480 "reset": true, 00:13:51.480 "nvme_admin": false, 00:13:51.480 "nvme_io": false, 00:13:51.480 "nvme_io_md": false, 00:13:51.480 "write_zeroes": true, 00:13:51.480 "zcopy": false, 00:13:51.480 "get_zone_info": false, 00:13:51.480 "zone_management": false, 00:13:51.480 "zone_append": false, 00:13:51.480 "compare": false, 00:13:51.480 "compare_and_write": false, 00:13:51.480 "abort": false, 00:13:51.480 "seek_hole": false, 00:13:51.480 "seek_data": false, 00:13:51.480 "copy": false, 00:13:51.480 "nvme_iov_md": false 00:13:51.480 }, 00:13:51.480 "memory_domains": [ 00:13:51.480 { 00:13:51.480 "dma_device_id": "system", 00:13:51.480 "dma_device_type": 1 00:13:51.480 }, 00:13:51.480 { 00:13:51.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.480 "dma_device_type": 2 00:13:51.480 }, 00:13:51.480 { 00:13:51.480 "dma_device_id": "system", 00:13:51.480 "dma_device_type": 1 00:13:51.480 }, 00:13:51.480 { 00:13:51.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.480 "dma_device_type": 2 00:13:51.480 }, 00:13:51.480 { 00:13:51.480 "dma_device_id": "system", 00:13:51.480 "dma_device_type": 1 00:13:51.480 }, 00:13:51.480 { 00:13:51.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.480 "dma_device_type": 2 00:13:51.480 } 00:13:51.480 ], 00:13:51.480 "driver_specific": { 
00:13:51.480 "raid": { 00:13:51.480 "uuid": "412e7e69-065f-4df0-9298-8b485cd79a72", 00:13:51.480 "strip_size_kb": 64, 00:13:51.480 "state": "online", 00:13:51.480 "raid_level": "raid0", 00:13:51.480 "superblock": false, 00:13:51.480 "num_base_bdevs": 3, 00:13:51.480 "num_base_bdevs_discovered": 3, 00:13:51.480 "num_base_bdevs_operational": 3, 00:13:51.480 "base_bdevs_list": [ 00:13:51.480 { 00:13:51.480 "name": "NewBaseBdev", 00:13:51.480 "uuid": "76eab5d3-5371-4640-8d6b-946469f9a9af", 00:13:51.480 "is_configured": true, 00:13:51.480 "data_offset": 0, 00:13:51.480 "data_size": 65536 00:13:51.480 }, 00:13:51.480 { 00:13:51.480 "name": "BaseBdev2", 00:13:51.480 "uuid": "6f63f88c-71a9-4c2f-93a9-1c98a40493fb", 00:13:51.481 "is_configured": true, 00:13:51.481 "data_offset": 0, 00:13:51.481 "data_size": 65536 00:13:51.481 }, 00:13:51.481 { 00:13:51.481 "name": "BaseBdev3", 00:13:51.481 "uuid": "64c1206b-8a50-4e3a-aaa3-b035d5a9ce14", 00:13:51.481 "is_configured": true, 00:13:51.481 "data_offset": 0, 00:13:51.481 "data_size": 65536 00:13:51.481 } 00:13:51.481 ] 00:13:51.481 } 00:13:51.481 } 00:13:51.481 }' 00:13:51.481 13:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:51.740 13:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:51.740 BaseBdev2 00:13:51.740 BaseBdev3' 00:13:51.740 13:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:51.740 13:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:51.740 13:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:51.740 13:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:51.740 13:33:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.740 13:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.740 13:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:51.740 13:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.740 13:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:51.740 13:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:51.740 13:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:51.740 13:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:51.740 13:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:51.740 13:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.740 13:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.741 13:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.741 13:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:51.741 13:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:51.741 13:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:51.741 13:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:51.741 13:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.741 13:33:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.741 13:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:51.741 13:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.741 13:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:51.741 13:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:51.741 13:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:51.741 13:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.741 13:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.741 [2024-11-20 13:33:51.167499] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:51.741 [2024-11-20 13:33:51.167528] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:51.741 [2024-11-20 13:33:51.167607] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:51.741 [2024-11-20 13:33:51.167658] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:51.741 [2024-11-20 13:33:51.167673] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:51.741 13:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.741 13:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63588 00:13:51.741 13:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63588 ']' 00:13:51.741 13:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 
-- # kill -0 63588 00:13:51.741 13:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:51.741 13:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:51.741 13:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63588 00:13:51.741 killing process with pid 63588 00:13:51.741 13:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:51.741 13:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:51.741 13:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63588' 00:13:51.741 13:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63588 00:13:51.741 [2024-11-20 13:33:51.210901] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:51.741 13:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63588 00:13:52.310 [2024-11-20 13:33:51.520510] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:53.247 13:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:53.247 00:13:53.247 real 0m10.345s 00:13:53.247 user 0m16.387s 00:13:53.247 sys 0m1.934s 00:13:53.247 13:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:53.247 13:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.247 ************************************ 00:13:53.247 END TEST raid_state_function_test 00:13:53.247 ************************************ 00:13:53.506 13:33:52 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:13:53.506 13:33:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:53.506 13:33:52 
bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:53.506 13:33:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:53.506 ************************************ 00:13:53.506 START TEST raid_state_function_test_sb 00:13:53.506 ************************************ 00:13:53.506 13:33:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:13:53.506 13:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:13:53.506 13:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:53.506 13:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:53.506 13:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:53.506 13:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:53.506 13:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:53.506 13:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:53.506 13:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:53.506 13:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:53.506 13:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:53.506 13:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:53.506 13:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:53.506 13:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:53.506 13:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:53.506 13:33:52 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:53.506 13:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:53.506 13:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:53.506 13:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:53.506 13:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:53.506 13:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:53.506 13:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:53.506 13:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:13:53.506 13:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:53.506 13:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:53.506 13:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:53.506 13:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:53.506 13:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:53.507 13:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64204 00:13:53.507 Process raid pid: 64204 00:13:53.507 13:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64204' 00:13:53.507 13:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64204 00:13:53.507 13:33:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64204 ']' 00:13:53.507 
13:33:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.507 13:33:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:53.507 13:33:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.507 13:33:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:53.507 13:33:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.507 [2024-11-20 13:33:52.848780] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:13:53.507 [2024-11-20 13:33:52.849236] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:53.766 [2024-11-20 13:33:53.024883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.766 [2024-11-20 13:33:53.143237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.026 [2024-11-20 13:33:53.361989] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:54.026 [2024-11-20 13:33:53.362221] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:54.286 13:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:54.286 13:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:54.286 13:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n 
Existed_Raid 00:13:54.286 13:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.286 13:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.286 [2024-11-20 13:33:53.679251] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:54.286 [2024-11-20 13:33:53.679310] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:54.286 [2024-11-20 13:33:53.679322] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:54.286 [2024-11-20 13:33:53.679335] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:54.286 [2024-11-20 13:33:53.679343] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:54.286 [2024-11-20 13:33:53.679355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:54.286 13:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.286 13:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:54.286 13:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:54.286 13:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:54.286 13:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:54.286 13:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:54.286 13:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:54.286 13:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.286 
13:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.286 13:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.286 13:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.286 13:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.286 13:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.286 13:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.286 13:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.286 13:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.286 13:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.286 "name": "Existed_Raid", 00:13:54.286 "uuid": "c9935a43-8976-4eea-b618-2496223828c4", 00:13:54.286 "strip_size_kb": 64, 00:13:54.286 "state": "configuring", 00:13:54.286 "raid_level": "raid0", 00:13:54.286 "superblock": true, 00:13:54.286 "num_base_bdevs": 3, 00:13:54.286 "num_base_bdevs_discovered": 0, 00:13:54.286 "num_base_bdevs_operational": 3, 00:13:54.286 "base_bdevs_list": [ 00:13:54.286 { 00:13:54.286 "name": "BaseBdev1", 00:13:54.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.286 "is_configured": false, 00:13:54.286 "data_offset": 0, 00:13:54.286 "data_size": 0 00:13:54.286 }, 00:13:54.286 { 00:13:54.286 "name": "BaseBdev2", 00:13:54.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.286 "is_configured": false, 00:13:54.286 "data_offset": 0, 00:13:54.286 "data_size": 0 00:13:54.286 }, 00:13:54.286 { 00:13:54.286 "name": "BaseBdev3", 00:13:54.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.286 "is_configured": 
false, 00:13:54.286 "data_offset": 0, 00:13:54.286 "data_size": 0 00:13:54.286 } 00:13:54.286 ] 00:13:54.286 }' 00:13:54.286 13:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.286 13:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.855 13:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:54.855 13:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.855 13:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.855 [2024-11-20 13:33:54.102737] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:54.855 [2024-11-20 13:33:54.102896] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:54.855 13:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.855 13:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:54.855 13:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.855 13:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.855 [2024-11-20 13:33:54.114730] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:54.855 [2024-11-20 13:33:54.114781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:54.855 [2024-11-20 13:33:54.114792] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:54.855 [2024-11-20 13:33:54.114804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:54.856 
[2024-11-20 13:33:54.114812] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:54.856 [2024-11-20 13:33:54.114824] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:54.856 13:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.856 13:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:54.856 13:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.856 13:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.856 [2024-11-20 13:33:54.157591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:54.856 BaseBdev1 00:13:54.856 13:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.856 13:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:54.856 13:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:54.856 13:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:54.856 13:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:54.856 13:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:54.856 13:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:54.856 13:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:54.856 13:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.856 13:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:13:54.856 13:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.856 13:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:54.856 13:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.856 13:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.856 [ 00:13:54.856 { 00:13:54.856 "name": "BaseBdev1", 00:13:54.856 "aliases": [ 00:13:54.856 "f6804989-8467-4025-a8eb-3652f4e55b1c" 00:13:54.856 ], 00:13:54.856 "product_name": "Malloc disk", 00:13:54.856 "block_size": 512, 00:13:54.856 "num_blocks": 65536, 00:13:54.856 "uuid": "f6804989-8467-4025-a8eb-3652f4e55b1c", 00:13:54.856 "assigned_rate_limits": { 00:13:54.856 "rw_ios_per_sec": 0, 00:13:54.856 "rw_mbytes_per_sec": 0, 00:13:54.856 "r_mbytes_per_sec": 0, 00:13:54.856 "w_mbytes_per_sec": 0 00:13:54.856 }, 00:13:54.856 "claimed": true, 00:13:54.856 "claim_type": "exclusive_write", 00:13:54.856 "zoned": false, 00:13:54.856 "supported_io_types": { 00:13:54.856 "read": true, 00:13:54.856 "write": true, 00:13:54.856 "unmap": true, 00:13:54.856 "flush": true, 00:13:54.856 "reset": true, 00:13:54.856 "nvme_admin": false, 00:13:54.856 "nvme_io": false, 00:13:54.856 "nvme_io_md": false, 00:13:54.856 "write_zeroes": true, 00:13:54.856 "zcopy": true, 00:13:54.856 "get_zone_info": false, 00:13:54.856 "zone_management": false, 00:13:54.856 "zone_append": false, 00:13:54.856 "compare": false, 00:13:54.856 "compare_and_write": false, 00:13:54.856 "abort": true, 00:13:54.856 "seek_hole": false, 00:13:54.856 "seek_data": false, 00:13:54.856 "copy": true, 00:13:54.856 "nvme_iov_md": false 00:13:54.856 }, 00:13:54.856 "memory_domains": [ 00:13:54.856 { 00:13:54.856 "dma_device_id": "system", 00:13:54.856 "dma_device_type": 1 00:13:54.856 }, 00:13:54.856 { 00:13:54.856 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:54.856 "dma_device_type": 2 00:13:54.856 } 00:13:54.856 ], 00:13:54.856 "driver_specific": {} 00:13:54.856 } 00:13:54.856 ] 00:13:54.856 13:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.856 13:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:54.856 13:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:54.856 13:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:54.856 13:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:54.856 13:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:54.856 13:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:54.856 13:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:54.856 13:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.856 13:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.856 13:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.856 13:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.856 13:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.856 13:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.856 13:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.856 13:33:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:54.856 13:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.856 13:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.856 "name": "Existed_Raid", 00:13:54.856 "uuid": "a55a8518-95a5-485a-b4fc-73bcacdf30ea", 00:13:54.856 "strip_size_kb": 64, 00:13:54.856 "state": "configuring", 00:13:54.856 "raid_level": "raid0", 00:13:54.856 "superblock": true, 00:13:54.856 "num_base_bdevs": 3, 00:13:54.856 "num_base_bdevs_discovered": 1, 00:13:54.856 "num_base_bdevs_operational": 3, 00:13:54.856 "base_bdevs_list": [ 00:13:54.856 { 00:13:54.856 "name": "BaseBdev1", 00:13:54.856 "uuid": "f6804989-8467-4025-a8eb-3652f4e55b1c", 00:13:54.856 "is_configured": true, 00:13:54.856 "data_offset": 2048, 00:13:54.856 "data_size": 63488 00:13:54.856 }, 00:13:54.856 { 00:13:54.856 "name": "BaseBdev2", 00:13:54.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.856 "is_configured": false, 00:13:54.856 "data_offset": 0, 00:13:54.856 "data_size": 0 00:13:54.856 }, 00:13:54.856 { 00:13:54.856 "name": "BaseBdev3", 00:13:54.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.856 "is_configured": false, 00:13:54.856 "data_offset": 0, 00:13:54.856 "data_size": 0 00:13:54.856 } 00:13:54.856 ] 00:13:54.856 }' 00:13:54.856 13:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.856 13:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.116 13:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:55.116 13:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.116 13:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.116 [2024-11-20 13:33:54.589198] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: 
delete raid bdev: Existed_Raid 00:13:55.116 [2024-11-20 13:33:54.589252] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:55.116 13:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.116 13:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:55.116 13:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.116 13:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.375 [2024-11-20 13:33:54.601259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:55.375 [2024-11-20 13:33:54.603579] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:55.375 [2024-11-20 13:33:54.603630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:55.375 [2024-11-20 13:33:54.603642] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:55.375 [2024-11-20 13:33:54.603656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:55.375 13:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.375 13:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:55.375 13:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:55.375 13:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:55.375 13:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.375 13:33:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.375 13:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:55.375 13:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.375 13:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:55.375 13:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.375 13:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.375 13:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.375 13:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.375 13:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.375 13:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.375 13:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.376 13:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.376 13:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.376 13:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.376 "name": "Existed_Raid", 00:13:55.376 "uuid": "2e82520e-c585-4e96-90f5-d9b3b2b6fdd6", 00:13:55.376 "strip_size_kb": 64, 00:13:55.376 "state": "configuring", 00:13:55.376 "raid_level": "raid0", 00:13:55.376 "superblock": true, 00:13:55.376 "num_base_bdevs": 3, 00:13:55.376 "num_base_bdevs_discovered": 1, 00:13:55.376 "num_base_bdevs_operational": 3, 00:13:55.376 "base_bdevs_list": [ 00:13:55.376 { 
00:13:55.376 "name": "BaseBdev1", 00:13:55.376 "uuid": "f6804989-8467-4025-a8eb-3652f4e55b1c", 00:13:55.376 "is_configured": true, 00:13:55.376 "data_offset": 2048, 00:13:55.376 "data_size": 63488 00:13:55.376 }, 00:13:55.376 { 00:13:55.376 "name": "BaseBdev2", 00:13:55.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.376 "is_configured": false, 00:13:55.376 "data_offset": 0, 00:13:55.376 "data_size": 0 00:13:55.376 }, 00:13:55.376 { 00:13:55.376 "name": "BaseBdev3", 00:13:55.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.376 "is_configured": false, 00:13:55.376 "data_offset": 0, 00:13:55.376 "data_size": 0 00:13:55.376 } 00:13:55.376 ] 00:13:55.376 }' 00:13:55.376 13:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.376 13:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.635 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:55.635 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.635 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.635 [2024-11-20 13:33:55.078978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:55.635 BaseBdev2 00:13:55.635 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.635 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:55.635 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:55.635 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:55.635 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:55.635 13:33:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:55.635 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:55.635 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:55.635 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.635 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.635 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.635 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:55.635 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.635 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.635 [ 00:13:55.635 { 00:13:55.635 "name": "BaseBdev2", 00:13:55.635 "aliases": [ 00:13:55.635 "aa5a8755-b297-4c1f-b617-744e60a75514" 00:13:55.635 ], 00:13:55.635 "product_name": "Malloc disk", 00:13:55.635 "block_size": 512, 00:13:55.635 "num_blocks": 65536, 00:13:55.635 "uuid": "aa5a8755-b297-4c1f-b617-744e60a75514", 00:13:55.635 "assigned_rate_limits": { 00:13:55.635 "rw_ios_per_sec": 0, 00:13:55.635 "rw_mbytes_per_sec": 0, 00:13:55.635 "r_mbytes_per_sec": 0, 00:13:55.635 "w_mbytes_per_sec": 0 00:13:55.635 }, 00:13:55.635 "claimed": true, 00:13:55.635 "claim_type": "exclusive_write", 00:13:55.635 "zoned": false, 00:13:55.635 "supported_io_types": { 00:13:55.635 "read": true, 00:13:55.635 "write": true, 00:13:55.635 "unmap": true, 00:13:55.635 "flush": true, 00:13:55.635 "reset": true, 00:13:55.635 "nvme_admin": false, 00:13:55.635 "nvme_io": false, 00:13:55.635 "nvme_io_md": false, 00:13:55.635 "write_zeroes": true, 00:13:55.895 "zcopy": true, 
00:13:55.895 "get_zone_info": false, 00:13:55.895 "zone_management": false, 00:13:55.895 "zone_append": false, 00:13:55.895 "compare": false, 00:13:55.895 "compare_and_write": false, 00:13:55.895 "abort": true, 00:13:55.895 "seek_hole": false, 00:13:55.895 "seek_data": false, 00:13:55.895 "copy": true, 00:13:55.895 "nvme_iov_md": false 00:13:55.895 }, 00:13:55.895 "memory_domains": [ 00:13:55.895 { 00:13:55.895 "dma_device_id": "system", 00:13:55.895 "dma_device_type": 1 00:13:55.895 }, 00:13:55.895 { 00:13:55.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.895 "dma_device_type": 2 00:13:55.895 } 00:13:55.895 ], 00:13:55.895 "driver_specific": {} 00:13:55.895 } 00:13:55.895 ] 00:13:55.895 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.895 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:55.895 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:55.895 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:55.895 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:55.895 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.895 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.895 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:55.895 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.895 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:55.895 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.895 
13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.895 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.895 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.895 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.895 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.896 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.896 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.896 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.896 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.896 "name": "Existed_Raid", 00:13:55.896 "uuid": "2e82520e-c585-4e96-90f5-d9b3b2b6fdd6", 00:13:55.896 "strip_size_kb": 64, 00:13:55.896 "state": "configuring", 00:13:55.896 "raid_level": "raid0", 00:13:55.896 "superblock": true, 00:13:55.896 "num_base_bdevs": 3, 00:13:55.896 "num_base_bdevs_discovered": 2, 00:13:55.896 "num_base_bdevs_operational": 3, 00:13:55.896 "base_bdevs_list": [ 00:13:55.896 { 00:13:55.896 "name": "BaseBdev1", 00:13:55.896 "uuid": "f6804989-8467-4025-a8eb-3652f4e55b1c", 00:13:55.896 "is_configured": true, 00:13:55.896 "data_offset": 2048, 00:13:55.896 "data_size": 63488 00:13:55.896 }, 00:13:55.896 { 00:13:55.896 "name": "BaseBdev2", 00:13:55.896 "uuid": "aa5a8755-b297-4c1f-b617-744e60a75514", 00:13:55.896 "is_configured": true, 00:13:55.896 "data_offset": 2048, 00:13:55.896 "data_size": 63488 00:13:55.896 }, 00:13:55.896 { 00:13:55.896 "name": "BaseBdev3", 00:13:55.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.896 
"is_configured": false, 00:13:55.896 "data_offset": 0, 00:13:55.896 "data_size": 0 00:13:55.896 } 00:13:55.896 ] 00:13:55.896 }' 00:13:55.896 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.896 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.156 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:56.156 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.156 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.156 [2024-11-20 13:33:55.548792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:56.156 [2024-11-20 13:33:55.549260] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:56.156 [2024-11-20 13:33:55.549387] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:56.156 [2024-11-20 13:33:55.549705] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:56.156 BaseBdev3 00:13:56.156 [2024-11-20 13:33:55.549891] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:56.156 [2024-11-20 13:33:55.550050] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:56.156 [2024-11-20 13:33:55.550286] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.156 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.156 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:56.156 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:56.156 13:33:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:56.156 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:56.156 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:56.156 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:56.156 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:56.156 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.156 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.156 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.156 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:56.156 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.156 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.156 [ 00:13:56.156 { 00:13:56.156 "name": "BaseBdev3", 00:13:56.156 "aliases": [ 00:13:56.156 "ad6f0c16-2009-4dec-9f48-25521d6a5571" 00:13:56.156 ], 00:13:56.156 "product_name": "Malloc disk", 00:13:56.156 "block_size": 512, 00:13:56.156 "num_blocks": 65536, 00:13:56.156 "uuid": "ad6f0c16-2009-4dec-9f48-25521d6a5571", 00:13:56.156 "assigned_rate_limits": { 00:13:56.156 "rw_ios_per_sec": 0, 00:13:56.156 "rw_mbytes_per_sec": 0, 00:13:56.156 "r_mbytes_per_sec": 0, 00:13:56.156 "w_mbytes_per_sec": 0 00:13:56.156 }, 00:13:56.156 "claimed": true, 00:13:56.156 "claim_type": "exclusive_write", 00:13:56.156 "zoned": false, 00:13:56.156 "supported_io_types": { 00:13:56.156 "read": true, 00:13:56.156 "write": true, 00:13:56.156 "unmap": true, 
00:13:56.156 "flush": true, 00:13:56.156 "reset": true, 00:13:56.156 "nvme_admin": false, 00:13:56.156 "nvme_io": false, 00:13:56.156 "nvme_io_md": false, 00:13:56.156 "write_zeroes": true, 00:13:56.156 "zcopy": true, 00:13:56.156 "get_zone_info": false, 00:13:56.156 "zone_management": false, 00:13:56.156 "zone_append": false, 00:13:56.156 "compare": false, 00:13:56.156 "compare_and_write": false, 00:13:56.156 "abort": true, 00:13:56.156 "seek_hole": false, 00:13:56.156 "seek_data": false, 00:13:56.156 "copy": true, 00:13:56.156 "nvme_iov_md": false 00:13:56.156 }, 00:13:56.156 "memory_domains": [ 00:13:56.156 { 00:13:56.156 "dma_device_id": "system", 00:13:56.156 "dma_device_type": 1 00:13:56.156 }, 00:13:56.156 { 00:13:56.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.156 "dma_device_type": 2 00:13:56.156 } 00:13:56.156 ], 00:13:56.156 "driver_specific": {} 00:13:56.156 } 00:13:56.156 ] 00:13:56.156 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.156 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:56.156 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:56.156 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:56.156 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:13:56.156 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.156 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.156 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:56.156 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.157 13:33:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:56.157 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.157 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.157 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.157 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.157 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.157 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.157 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.157 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.157 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.417 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.417 "name": "Existed_Raid", 00:13:56.417 "uuid": "2e82520e-c585-4e96-90f5-d9b3b2b6fdd6", 00:13:56.417 "strip_size_kb": 64, 00:13:56.417 "state": "online", 00:13:56.417 "raid_level": "raid0", 00:13:56.417 "superblock": true, 00:13:56.417 "num_base_bdevs": 3, 00:13:56.417 "num_base_bdevs_discovered": 3, 00:13:56.417 "num_base_bdevs_operational": 3, 00:13:56.417 "base_bdevs_list": [ 00:13:56.417 { 00:13:56.417 "name": "BaseBdev1", 00:13:56.417 "uuid": "f6804989-8467-4025-a8eb-3652f4e55b1c", 00:13:56.417 "is_configured": true, 00:13:56.417 "data_offset": 2048, 00:13:56.417 "data_size": 63488 00:13:56.417 }, 00:13:56.417 { 00:13:56.417 "name": "BaseBdev2", 00:13:56.417 "uuid": "aa5a8755-b297-4c1f-b617-744e60a75514", 00:13:56.417 
"is_configured": true, 00:13:56.417 "data_offset": 2048, 00:13:56.417 "data_size": 63488 00:13:56.417 }, 00:13:56.417 { 00:13:56.417 "name": "BaseBdev3", 00:13:56.417 "uuid": "ad6f0c16-2009-4dec-9f48-25521d6a5571", 00:13:56.417 "is_configured": true, 00:13:56.417 "data_offset": 2048, 00:13:56.417 "data_size": 63488 00:13:56.417 } 00:13:56.417 ] 00:13:56.417 }' 00:13:56.417 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.417 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.677 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:56.677 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:56.677 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:56.677 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:56.677 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:56.677 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:56.677 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:56.677 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:56.677 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.677 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.677 [2024-11-20 13:33:55.960550] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:56.677 13:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.677 13:33:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:56.677 "name": "Existed_Raid", 00:13:56.677 "aliases": [ 00:13:56.677 "2e82520e-c585-4e96-90f5-d9b3b2b6fdd6" 00:13:56.677 ], 00:13:56.677 "product_name": "Raid Volume", 00:13:56.677 "block_size": 512, 00:13:56.677 "num_blocks": 190464, 00:13:56.677 "uuid": "2e82520e-c585-4e96-90f5-d9b3b2b6fdd6", 00:13:56.677 "assigned_rate_limits": { 00:13:56.677 "rw_ios_per_sec": 0, 00:13:56.677 "rw_mbytes_per_sec": 0, 00:13:56.677 "r_mbytes_per_sec": 0, 00:13:56.677 "w_mbytes_per_sec": 0 00:13:56.677 }, 00:13:56.677 "claimed": false, 00:13:56.677 "zoned": false, 00:13:56.677 "supported_io_types": { 00:13:56.677 "read": true, 00:13:56.677 "write": true, 00:13:56.677 "unmap": true, 00:13:56.677 "flush": true, 00:13:56.677 "reset": true, 00:13:56.677 "nvme_admin": false, 00:13:56.677 "nvme_io": false, 00:13:56.677 "nvme_io_md": false, 00:13:56.677 "write_zeroes": true, 00:13:56.677 "zcopy": false, 00:13:56.677 "get_zone_info": false, 00:13:56.677 "zone_management": false, 00:13:56.677 "zone_append": false, 00:13:56.677 "compare": false, 00:13:56.677 "compare_and_write": false, 00:13:56.677 "abort": false, 00:13:56.677 "seek_hole": false, 00:13:56.677 "seek_data": false, 00:13:56.677 "copy": false, 00:13:56.677 "nvme_iov_md": false 00:13:56.677 }, 00:13:56.677 "memory_domains": [ 00:13:56.677 { 00:13:56.677 "dma_device_id": "system", 00:13:56.677 "dma_device_type": 1 00:13:56.677 }, 00:13:56.677 { 00:13:56.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.677 "dma_device_type": 2 00:13:56.677 }, 00:13:56.677 { 00:13:56.677 "dma_device_id": "system", 00:13:56.677 "dma_device_type": 1 00:13:56.677 }, 00:13:56.677 { 00:13:56.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.677 "dma_device_type": 2 00:13:56.677 }, 00:13:56.677 { 00:13:56.677 "dma_device_id": "system", 00:13:56.677 "dma_device_type": 1 00:13:56.677 }, 00:13:56.677 { 00:13:56.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:13:56.677 "dma_device_type": 2 00:13:56.677 } 00:13:56.677 ], 00:13:56.677 "driver_specific": { 00:13:56.677 "raid": { 00:13:56.677 "uuid": "2e82520e-c585-4e96-90f5-d9b3b2b6fdd6", 00:13:56.677 "strip_size_kb": 64, 00:13:56.677 "state": "online", 00:13:56.677 "raid_level": "raid0", 00:13:56.677 "superblock": true, 00:13:56.677 "num_base_bdevs": 3, 00:13:56.677 "num_base_bdevs_discovered": 3, 00:13:56.677 "num_base_bdevs_operational": 3, 00:13:56.677 "base_bdevs_list": [ 00:13:56.677 { 00:13:56.677 "name": "BaseBdev1", 00:13:56.677 "uuid": "f6804989-8467-4025-a8eb-3652f4e55b1c", 00:13:56.677 "is_configured": true, 00:13:56.677 "data_offset": 2048, 00:13:56.677 "data_size": 63488 00:13:56.677 }, 00:13:56.677 { 00:13:56.677 "name": "BaseBdev2", 00:13:56.677 "uuid": "aa5a8755-b297-4c1f-b617-744e60a75514", 00:13:56.677 "is_configured": true, 00:13:56.677 "data_offset": 2048, 00:13:56.677 "data_size": 63488 00:13:56.677 }, 00:13:56.677 { 00:13:56.677 "name": "BaseBdev3", 00:13:56.677 "uuid": "ad6f0c16-2009-4dec-9f48-25521d6a5571", 00:13:56.677 "is_configured": true, 00:13:56.677 "data_offset": 2048, 00:13:56.677 "data_size": 63488 00:13:56.677 } 00:13:56.677 ] 00:13:56.677 } 00:13:56.677 } 00:13:56.677 }' 00:13:56.677 13:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:56.677 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:56.677 BaseBdev2 00:13:56.677 BaseBdev3' 00:13:56.677 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:56.677 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:56.677 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:56.677 13:33:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:56.677 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:56.677 13:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.677 13:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.677 13:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.677 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:56.677 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:56.677 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:56.678 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:56.678 13:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.678 13:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.678 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:56.678 13:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.989 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:56.989 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:56.989 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:56.989 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:56.989 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:56.989 13:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.989 13:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.989 13:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.989 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:56.989 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:56.989 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:56.989 13:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.989 13:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.989 [2024-11-20 13:33:56.232246] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:56.989 [2024-11-20 13:33:56.232274] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:56.989 [2024-11-20 13:33:56.232327] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:56.989 13:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.989 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:56.989 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:13:56.989 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:56.989 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 
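The trace above shows `has_redundancy raid0` hitting the fall-through case at bdev_raid.sh@198-200 and returning 1, which is why the test then expects the array to go `offline` (not degraded) after a base bdev is deleted. A minimal sketch of that helper and the state decision, assuming the usual SPDK raid levels — the exact level list in bdev_raid.sh may differ:

```shell
# Sketch of the has_redundancy helper seen in the trace (bdev_raid.sh@198-200).
# Levels with redundancy survive losing a base bdev; raid0 does not, so the
# raid bdev transitions online -> offline when BaseBdev1 is removed.
has_redundancy() {
    case $1 in
        raid1 | raid5f) return 0 ;;  # redundant levels (assumed list)
        *) return 1 ;;               # raid0, concat: no redundancy
    esac
}

if has_redundancy raid0; then
    expected_state=online   # degraded-but-running would stay online
else
    expected_state=offline  # matches bdev_raid.sh@262 in the trace
fi
echo "$expected_state"
```

With `raid0` this prints `offline`, matching the `expected_state=offline` assignment recorded in the log.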
00:13:56.989 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:56.989 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:13:56.989 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.989 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:56.989 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:56.989 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.989 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:56.989 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.989 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.989 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.989 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.989 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.989 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.989 13:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.990 13:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.990 13:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.990 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.990 
"name": "Existed_Raid", 00:13:56.990 "uuid": "2e82520e-c585-4e96-90f5-d9b3b2b6fdd6", 00:13:56.990 "strip_size_kb": 64, 00:13:56.990 "state": "offline", 00:13:56.990 "raid_level": "raid0", 00:13:56.990 "superblock": true, 00:13:56.990 "num_base_bdevs": 3, 00:13:56.990 "num_base_bdevs_discovered": 2, 00:13:56.990 "num_base_bdevs_operational": 2, 00:13:56.990 "base_bdevs_list": [ 00:13:56.990 { 00:13:56.990 "name": null, 00:13:56.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.990 "is_configured": false, 00:13:56.990 "data_offset": 0, 00:13:56.990 "data_size": 63488 00:13:56.990 }, 00:13:56.990 { 00:13:56.990 "name": "BaseBdev2", 00:13:56.990 "uuid": "aa5a8755-b297-4c1f-b617-744e60a75514", 00:13:56.990 "is_configured": true, 00:13:56.990 "data_offset": 2048, 00:13:56.990 "data_size": 63488 00:13:56.990 }, 00:13:56.990 { 00:13:56.990 "name": "BaseBdev3", 00:13:56.990 "uuid": "ad6f0c16-2009-4dec-9f48-25521d6a5571", 00:13:56.990 "is_configured": true, 00:13:56.990 "data_offset": 2048, 00:13:56.990 "data_size": 63488 00:13:56.990 } 00:13:56.990 ] 00:13:56.990 }' 00:13:56.990 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.990 13:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.256 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:57.256 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:57.256 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.256 13:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.256 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:57.256 13:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.256 13:33:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.256 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:57.256 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:57.256 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:57.256 13:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.256 13:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.515 [2024-11-20 13:33:56.745242] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:57.515 13:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.515 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:57.515 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:57.515 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.515 13:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.515 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:57.515 13:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.515 13:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.515 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:57.515 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:57.515 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- 
# rpc_cmd bdev_malloc_delete BaseBdev3 00:13:57.515 13:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.515 13:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.515 [2024-11-20 13:33:56.898103] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:57.515 [2024-11-20 13:33:56.898287] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:57.515 13:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.515 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:57.515 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:57.775 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:57.775 13:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:57.775 13:33:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.775 BaseBdev2 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.775 [ 00:13:57.775 { 
00:13:57.775 "name": "BaseBdev2", 00:13:57.775 "aliases": [ 00:13:57.775 "fb8808e5-a587-4811-9d45-c62befe58546" 00:13:57.775 ], 00:13:57.775 "product_name": "Malloc disk", 00:13:57.775 "block_size": 512, 00:13:57.775 "num_blocks": 65536, 00:13:57.775 "uuid": "fb8808e5-a587-4811-9d45-c62befe58546", 00:13:57.775 "assigned_rate_limits": { 00:13:57.775 "rw_ios_per_sec": 0, 00:13:57.775 "rw_mbytes_per_sec": 0, 00:13:57.775 "r_mbytes_per_sec": 0, 00:13:57.775 "w_mbytes_per_sec": 0 00:13:57.775 }, 00:13:57.775 "claimed": false, 00:13:57.775 "zoned": false, 00:13:57.775 "supported_io_types": { 00:13:57.775 "read": true, 00:13:57.775 "write": true, 00:13:57.775 "unmap": true, 00:13:57.775 "flush": true, 00:13:57.775 "reset": true, 00:13:57.775 "nvme_admin": false, 00:13:57.775 "nvme_io": false, 00:13:57.775 "nvme_io_md": false, 00:13:57.775 "write_zeroes": true, 00:13:57.775 "zcopy": true, 00:13:57.775 "get_zone_info": false, 00:13:57.775 "zone_management": false, 00:13:57.775 "zone_append": false, 00:13:57.775 "compare": false, 00:13:57.775 "compare_and_write": false, 00:13:57.775 "abort": true, 00:13:57.775 "seek_hole": false, 00:13:57.775 "seek_data": false, 00:13:57.775 "copy": true, 00:13:57.775 "nvme_iov_md": false 00:13:57.775 }, 00:13:57.775 "memory_domains": [ 00:13:57.775 { 00:13:57.775 "dma_device_id": "system", 00:13:57.775 "dma_device_type": 1 00:13:57.775 }, 00:13:57.775 { 00:13:57.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.775 "dma_device_type": 2 00:13:57.775 } 00:13:57.775 ], 00:13:57.775 "driver_specific": {} 00:13:57.775 } 00:13:57.775 ] 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.775 BaseBdev3 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.775 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:13:57.775 [ 00:13:57.775 { 00:13:57.775 "name": "BaseBdev3", 00:13:57.775 "aliases": [ 00:13:57.775 "17078982-f047-4f27-9aa8-4876ee043f92" 00:13:57.775 ], 00:13:57.775 "product_name": "Malloc disk", 00:13:57.775 "block_size": 512, 00:13:57.775 "num_blocks": 65536, 00:13:57.775 "uuid": "17078982-f047-4f27-9aa8-4876ee043f92", 00:13:57.775 "assigned_rate_limits": { 00:13:57.775 "rw_ios_per_sec": 0, 00:13:57.775 "rw_mbytes_per_sec": 0, 00:13:57.775 "r_mbytes_per_sec": 0, 00:13:57.775 "w_mbytes_per_sec": 0 00:13:57.775 }, 00:13:57.775 "claimed": false, 00:13:57.775 "zoned": false, 00:13:57.775 "supported_io_types": { 00:13:57.776 "read": true, 00:13:57.776 "write": true, 00:13:57.776 "unmap": true, 00:13:57.776 "flush": true, 00:13:57.776 "reset": true, 00:13:57.776 "nvme_admin": false, 00:13:57.776 "nvme_io": false, 00:13:57.776 "nvme_io_md": false, 00:13:57.776 "write_zeroes": true, 00:13:57.776 "zcopy": true, 00:13:57.776 "get_zone_info": false, 00:13:57.776 "zone_management": false, 00:13:57.776 "zone_append": false, 00:13:57.776 "compare": false, 00:13:57.776 "compare_and_write": false, 00:13:57.776 "abort": true, 00:13:57.776 "seek_hole": false, 00:13:57.776 "seek_data": false, 00:13:57.776 "copy": true, 00:13:57.776 "nvme_iov_md": false 00:13:57.776 }, 00:13:57.776 "memory_domains": [ 00:13:57.776 { 00:13:57.776 "dma_device_id": "system", 00:13:57.776 "dma_device_type": 1 00:13:57.776 }, 00:13:57.776 { 00:13:57.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.776 "dma_device_type": 2 00:13:57.776 } 00:13:57.776 ], 00:13:57.776 "driver_specific": {} 00:13:57.776 } 00:13:57.776 ] 00:13:57.776 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.776 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:57.776 13:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:57.776 13:33:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:57.776 13:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:57.776 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.776 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.776 [2024-11-20 13:33:57.226069] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:57.776 [2024-11-20 13:33:57.226135] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:57.776 [2024-11-20 13:33:57.226160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:57.776 [2024-11-20 13:33:57.228273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:57.776 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.776 13:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:57.776 13:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:57.776 13:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:57.776 13:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:57.776 13:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.776 13:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:57.776 13:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.776 
13:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.776 13:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.776 13:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.776 13:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.776 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.776 13:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.776 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.035 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.035 13:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.035 "name": "Existed_Raid", 00:13:58.035 "uuid": "b7b4b613-8ecf-4bd5-8e6a-0301b2ec102f", 00:13:58.035 "strip_size_kb": 64, 00:13:58.035 "state": "configuring", 00:13:58.035 "raid_level": "raid0", 00:13:58.035 "superblock": true, 00:13:58.035 "num_base_bdevs": 3, 00:13:58.035 "num_base_bdevs_discovered": 2, 00:13:58.035 "num_base_bdevs_operational": 3, 00:13:58.035 "base_bdevs_list": [ 00:13:58.036 { 00:13:58.036 "name": "BaseBdev1", 00:13:58.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.036 "is_configured": false, 00:13:58.036 "data_offset": 0, 00:13:58.036 "data_size": 0 00:13:58.036 }, 00:13:58.036 { 00:13:58.036 "name": "BaseBdev2", 00:13:58.036 "uuid": "fb8808e5-a587-4811-9d45-c62befe58546", 00:13:58.036 "is_configured": true, 00:13:58.036 "data_offset": 2048, 00:13:58.036 "data_size": 63488 00:13:58.036 }, 00:13:58.036 { 00:13:58.036 "name": "BaseBdev3", 00:13:58.036 "uuid": "17078982-f047-4f27-9aa8-4876ee043f92", 00:13:58.036 
"is_configured": true, 00:13:58.036 "data_offset": 2048, 00:13:58.036 "data_size": 63488 00:13:58.036 } 00:13:58.036 ] 00:13:58.036 }' 00:13:58.036 13:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.036 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.295 13:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:58.295 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.295 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.295 [2024-11-20 13:33:57.617511] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:58.295 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.295 13:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:58.295 13:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:58.295 13:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:58.295 13:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:58.295 13:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:58.295 13:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:58.295 13:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.295 13:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.295 13:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:58.295 13:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.295 13:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.295 13:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.295 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.295 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.295 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.295 13:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.295 "name": "Existed_Raid", 00:13:58.295 "uuid": "b7b4b613-8ecf-4bd5-8e6a-0301b2ec102f", 00:13:58.295 "strip_size_kb": 64, 00:13:58.295 "state": "configuring", 00:13:58.295 "raid_level": "raid0", 00:13:58.295 "superblock": true, 00:13:58.295 "num_base_bdevs": 3, 00:13:58.295 "num_base_bdevs_discovered": 1, 00:13:58.295 "num_base_bdevs_operational": 3, 00:13:58.295 "base_bdevs_list": [ 00:13:58.295 { 00:13:58.295 "name": "BaseBdev1", 00:13:58.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.295 "is_configured": false, 00:13:58.295 "data_offset": 0, 00:13:58.295 "data_size": 0 00:13:58.295 }, 00:13:58.295 { 00:13:58.295 "name": null, 00:13:58.295 "uuid": "fb8808e5-a587-4811-9d45-c62befe58546", 00:13:58.295 "is_configured": false, 00:13:58.295 "data_offset": 0, 00:13:58.295 "data_size": 63488 00:13:58.295 }, 00:13:58.295 { 00:13:58.295 "name": "BaseBdev3", 00:13:58.295 "uuid": "17078982-f047-4f27-9aa8-4876ee043f92", 00:13:58.295 "is_configured": true, 00:13:58.295 "data_offset": 2048, 00:13:58.295 "data_size": 63488 00:13:58.295 } 00:13:58.295 ] 00:13:58.295 }' 00:13:58.295 13:33:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.295 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.554 13:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.554 13:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:58.554 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.554 13:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.554 13:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.554 13:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:58.554 13:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:58.554 13:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.554 13:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.814 [2024-11-20 13:33:58.051602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:58.814 BaseBdev1 00:13:58.814 13:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.814 13:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:58.814 13:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:58.814 13:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:58.814 13:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:58.814 13:33:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:58.814 13:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:58.814 13:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:58.814 13:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.814 13:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.814 13:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.814 13:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:58.814 13:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.814 13:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.814 [ 00:13:58.814 { 00:13:58.814 "name": "BaseBdev1", 00:13:58.814 "aliases": [ 00:13:58.814 "51b69efe-c108-441a-b222-e3895a69a5f1" 00:13:58.814 ], 00:13:58.814 "product_name": "Malloc disk", 00:13:58.814 "block_size": 512, 00:13:58.814 "num_blocks": 65536, 00:13:58.814 "uuid": "51b69efe-c108-441a-b222-e3895a69a5f1", 00:13:58.814 "assigned_rate_limits": { 00:13:58.814 "rw_ios_per_sec": 0, 00:13:58.814 "rw_mbytes_per_sec": 0, 00:13:58.814 "r_mbytes_per_sec": 0, 00:13:58.814 "w_mbytes_per_sec": 0 00:13:58.814 }, 00:13:58.814 "claimed": true, 00:13:58.814 "claim_type": "exclusive_write", 00:13:58.814 "zoned": false, 00:13:58.814 "supported_io_types": { 00:13:58.814 "read": true, 00:13:58.814 "write": true, 00:13:58.814 "unmap": true, 00:13:58.814 "flush": true, 00:13:58.814 "reset": true, 00:13:58.814 "nvme_admin": false, 00:13:58.814 "nvme_io": false, 00:13:58.814 "nvme_io_md": false, 00:13:58.814 "write_zeroes": true, 00:13:58.814 "zcopy": true, 00:13:58.814 "get_zone_info": false, 00:13:58.814 
"zone_management": false, 00:13:58.814 "zone_append": false, 00:13:58.814 "compare": false, 00:13:58.814 "compare_and_write": false, 00:13:58.814 "abort": true, 00:13:58.814 "seek_hole": false, 00:13:58.814 "seek_data": false, 00:13:58.814 "copy": true, 00:13:58.814 "nvme_iov_md": false 00:13:58.814 }, 00:13:58.814 "memory_domains": [ 00:13:58.814 { 00:13:58.814 "dma_device_id": "system", 00:13:58.814 "dma_device_type": 1 00:13:58.814 }, 00:13:58.814 { 00:13:58.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.814 "dma_device_type": 2 00:13:58.814 } 00:13:58.814 ], 00:13:58.814 "driver_specific": {} 00:13:58.814 } 00:13:58.814 ] 00:13:58.814 13:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.814 13:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:58.814 13:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:58.814 13:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:58.814 13:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:58.814 13:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:58.814 13:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:58.814 13:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:58.814 13:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.814 13:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.814 13:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.814 13:33:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.814 13:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.814 13:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.814 13:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.814 13:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.814 13:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.814 13:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.814 "name": "Existed_Raid", 00:13:58.814 "uuid": "b7b4b613-8ecf-4bd5-8e6a-0301b2ec102f", 00:13:58.814 "strip_size_kb": 64, 00:13:58.814 "state": "configuring", 00:13:58.814 "raid_level": "raid0", 00:13:58.814 "superblock": true, 00:13:58.814 "num_base_bdevs": 3, 00:13:58.814 "num_base_bdevs_discovered": 2, 00:13:58.814 "num_base_bdevs_operational": 3, 00:13:58.814 "base_bdevs_list": [ 00:13:58.814 { 00:13:58.814 "name": "BaseBdev1", 00:13:58.814 "uuid": "51b69efe-c108-441a-b222-e3895a69a5f1", 00:13:58.814 "is_configured": true, 00:13:58.814 "data_offset": 2048, 00:13:58.814 "data_size": 63488 00:13:58.814 }, 00:13:58.814 { 00:13:58.814 "name": null, 00:13:58.815 "uuid": "fb8808e5-a587-4811-9d45-c62befe58546", 00:13:58.815 "is_configured": false, 00:13:58.815 "data_offset": 0, 00:13:58.815 "data_size": 63488 00:13:58.815 }, 00:13:58.815 { 00:13:58.815 "name": "BaseBdev3", 00:13:58.815 "uuid": "17078982-f047-4f27-9aa8-4876ee043f92", 00:13:58.815 "is_configured": true, 00:13:58.815 "data_offset": 2048, 00:13:58.815 "data_size": 63488 00:13:58.815 } 00:13:58.815 ] 00:13:58.815 }' 00:13:58.815 13:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.815 
13:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.074 13:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.074 13:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.074 13:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:59.074 13:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.074 13:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.074 13:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:59.074 13:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:59.074 13:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.074 13:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.334 [2024-11-20 13:33:58.563016] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:59.334 13:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.334 13:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:59.334 13:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:59.334 13:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:59.334 13:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:59.334 13:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:59.334 13:33:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:59.334 13:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.334 13:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.334 13:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.334 13:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.334 13:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.334 13:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.334 13:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.334 13:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.334 13:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.334 13:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.334 "name": "Existed_Raid", 00:13:59.334 "uuid": "b7b4b613-8ecf-4bd5-8e6a-0301b2ec102f", 00:13:59.334 "strip_size_kb": 64, 00:13:59.334 "state": "configuring", 00:13:59.334 "raid_level": "raid0", 00:13:59.334 "superblock": true, 00:13:59.334 "num_base_bdevs": 3, 00:13:59.334 "num_base_bdevs_discovered": 1, 00:13:59.334 "num_base_bdevs_operational": 3, 00:13:59.334 "base_bdevs_list": [ 00:13:59.334 { 00:13:59.334 "name": "BaseBdev1", 00:13:59.334 "uuid": "51b69efe-c108-441a-b222-e3895a69a5f1", 00:13:59.334 "is_configured": true, 00:13:59.334 "data_offset": 2048, 00:13:59.334 "data_size": 63488 00:13:59.334 }, 00:13:59.334 { 00:13:59.334 "name": null, 00:13:59.334 "uuid": "fb8808e5-a587-4811-9d45-c62befe58546", 00:13:59.334 "is_configured": 
false, 00:13:59.334 "data_offset": 0, 00:13:59.334 "data_size": 63488 00:13:59.334 }, 00:13:59.334 { 00:13:59.334 "name": null, 00:13:59.334 "uuid": "17078982-f047-4f27-9aa8-4876ee043f92", 00:13:59.334 "is_configured": false, 00:13:59.334 "data_offset": 0, 00:13:59.334 "data_size": 63488 00:13:59.334 } 00:13:59.334 ] 00:13:59.334 }' 00:13:59.334 13:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.334 13:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.594 13:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.594 13:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.594 13:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.594 13:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:59.594 13:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.594 13:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:59.594 13:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:59.594 13:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.594 13:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.594 [2024-11-20 13:33:59.002422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:59.594 13:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.594 13:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 
00:13:59.594 13:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:59.594 13:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:59.594 13:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:59.594 13:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:59.594 13:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:59.594 13:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.594 13:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.594 13:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.594 13:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.594 13:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.594 13:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.594 13:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.594 13:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.594 13:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.594 13:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.594 "name": "Existed_Raid", 00:13:59.594 "uuid": "b7b4b613-8ecf-4bd5-8e6a-0301b2ec102f", 00:13:59.594 "strip_size_kb": 64, 00:13:59.594 "state": "configuring", 00:13:59.594 "raid_level": "raid0", 00:13:59.594 "superblock": true, 00:13:59.594 
"num_base_bdevs": 3, 00:13:59.594 "num_base_bdevs_discovered": 2, 00:13:59.594 "num_base_bdevs_operational": 3, 00:13:59.594 "base_bdevs_list": [ 00:13:59.594 { 00:13:59.594 "name": "BaseBdev1", 00:13:59.594 "uuid": "51b69efe-c108-441a-b222-e3895a69a5f1", 00:13:59.594 "is_configured": true, 00:13:59.594 "data_offset": 2048, 00:13:59.594 "data_size": 63488 00:13:59.594 }, 00:13:59.594 { 00:13:59.594 "name": null, 00:13:59.594 "uuid": "fb8808e5-a587-4811-9d45-c62befe58546", 00:13:59.594 "is_configured": false, 00:13:59.594 "data_offset": 0, 00:13:59.594 "data_size": 63488 00:13:59.594 }, 00:13:59.594 { 00:13:59.594 "name": "BaseBdev3", 00:13:59.594 "uuid": "17078982-f047-4f27-9aa8-4876ee043f92", 00:13:59.594 "is_configured": true, 00:13:59.594 "data_offset": 2048, 00:13:59.594 "data_size": 63488 00:13:59.594 } 00:13:59.594 ] 00:13:59.594 }' 00:13:59.594 13:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.594 13:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.173 13:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.173 13:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.173 13:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.173 13:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:00.173 13:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.173 13:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:00.173 13:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:00.173 13:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:00.173 13:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.173 [2024-11-20 13:33:59.470472] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:00.173 13:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.173 13:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:00.173 13:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:00.173 13:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:00.173 13:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:00.173 13:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.173 13:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:00.173 13:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.173 13:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.173 13:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.173 13:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.173 13:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.173 13:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.173 13:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.173 13:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.173 
13:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.173 13:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.173 "name": "Existed_Raid", 00:14:00.173 "uuid": "b7b4b613-8ecf-4bd5-8e6a-0301b2ec102f", 00:14:00.173 "strip_size_kb": 64, 00:14:00.173 "state": "configuring", 00:14:00.173 "raid_level": "raid0", 00:14:00.173 "superblock": true, 00:14:00.173 "num_base_bdevs": 3, 00:14:00.173 "num_base_bdevs_discovered": 1, 00:14:00.173 "num_base_bdevs_operational": 3, 00:14:00.173 "base_bdevs_list": [ 00:14:00.173 { 00:14:00.173 "name": null, 00:14:00.173 "uuid": "51b69efe-c108-441a-b222-e3895a69a5f1", 00:14:00.173 "is_configured": false, 00:14:00.173 "data_offset": 0, 00:14:00.173 "data_size": 63488 00:14:00.173 }, 00:14:00.173 { 00:14:00.173 "name": null, 00:14:00.173 "uuid": "fb8808e5-a587-4811-9d45-c62befe58546", 00:14:00.173 "is_configured": false, 00:14:00.173 "data_offset": 0, 00:14:00.173 "data_size": 63488 00:14:00.173 }, 00:14:00.173 { 00:14:00.173 "name": "BaseBdev3", 00:14:00.173 "uuid": "17078982-f047-4f27-9aa8-4876ee043f92", 00:14:00.173 "is_configured": true, 00:14:00.173 "data_offset": 2048, 00:14:00.173 "data_size": 63488 00:14:00.173 } 00:14:00.173 ] 00:14:00.173 }' 00:14:00.173 13:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.173 13:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.741 13:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.741 13:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.741 13:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.741 13:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:00.741 
13:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.741 13:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:00.741 13:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:00.741 13:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.741 13:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.741 [2024-11-20 13:34:00.047130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:00.741 13:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.741 13:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:00.741 13:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:00.741 13:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:00.741 13:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:00.741 13:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.741 13:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:00.741 13:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.741 13:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.741 13:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.741 13:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:14:00.741 13:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.741 13:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.741 13:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.741 13:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.741 13:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.741 13:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.741 "name": "Existed_Raid", 00:14:00.741 "uuid": "b7b4b613-8ecf-4bd5-8e6a-0301b2ec102f", 00:14:00.741 "strip_size_kb": 64, 00:14:00.741 "state": "configuring", 00:14:00.741 "raid_level": "raid0", 00:14:00.741 "superblock": true, 00:14:00.741 "num_base_bdevs": 3, 00:14:00.741 "num_base_bdevs_discovered": 2, 00:14:00.741 "num_base_bdevs_operational": 3, 00:14:00.741 "base_bdevs_list": [ 00:14:00.741 { 00:14:00.741 "name": null, 00:14:00.741 "uuid": "51b69efe-c108-441a-b222-e3895a69a5f1", 00:14:00.741 "is_configured": false, 00:14:00.741 "data_offset": 0, 00:14:00.741 "data_size": 63488 00:14:00.741 }, 00:14:00.741 { 00:14:00.741 "name": "BaseBdev2", 00:14:00.741 "uuid": "fb8808e5-a587-4811-9d45-c62befe58546", 00:14:00.741 "is_configured": true, 00:14:00.741 "data_offset": 2048, 00:14:00.741 "data_size": 63488 00:14:00.741 }, 00:14:00.741 { 00:14:00.741 "name": "BaseBdev3", 00:14:00.741 "uuid": "17078982-f047-4f27-9aa8-4876ee043f92", 00:14:00.741 "is_configured": true, 00:14:00.741 "data_offset": 2048, 00:14:00.741 "data_size": 63488 00:14:00.741 } 00:14:00.741 ] 00:14:00.741 }' 00:14:00.741 13:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.741 13:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:14:01.309 13:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.309 13:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.309 13:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.309 13:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:01.309 13:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.309 13:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:01.309 13:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:01.309 13:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.309 13:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.309 13:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.309 13:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.309 13:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 51b69efe-c108-441a-b222-e3895a69a5f1 00:14:01.309 13:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.309 13:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.309 [2024-11-20 13:34:00.596350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:01.309 [2024-11-20 13:34:00.596568] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:01.309 [2024-11-20 13:34:00.596586] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:01.309 [2024-11-20 13:34:00.596843] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:01.309 [2024-11-20 13:34:00.596990] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:01.309 [2024-11-20 13:34:00.597000] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:01.309 [2024-11-20 13:34:00.597159] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:01.309 NewBaseBdev 00:14:01.309 13:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.309 13:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:01.309 13:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:01.309 13:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:01.309 13:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:01.309 13:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:01.309 13:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:01.309 13:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:01.309 13:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.309 13:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.309 13:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.309 13:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:01.309 13:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.309 13:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.309 [ 00:14:01.309 { 00:14:01.309 "name": "NewBaseBdev", 00:14:01.309 "aliases": [ 00:14:01.309 "51b69efe-c108-441a-b222-e3895a69a5f1" 00:14:01.309 ], 00:14:01.309 "product_name": "Malloc disk", 00:14:01.309 "block_size": 512, 00:14:01.309 "num_blocks": 65536, 00:14:01.309 "uuid": "51b69efe-c108-441a-b222-e3895a69a5f1", 00:14:01.310 "assigned_rate_limits": { 00:14:01.310 "rw_ios_per_sec": 0, 00:14:01.310 "rw_mbytes_per_sec": 0, 00:14:01.310 "r_mbytes_per_sec": 0, 00:14:01.310 "w_mbytes_per_sec": 0 00:14:01.310 }, 00:14:01.310 "claimed": true, 00:14:01.310 "claim_type": "exclusive_write", 00:14:01.310 "zoned": false, 00:14:01.310 "supported_io_types": { 00:14:01.310 "read": true, 00:14:01.310 "write": true, 00:14:01.310 "unmap": true, 00:14:01.310 "flush": true, 00:14:01.310 "reset": true, 00:14:01.310 "nvme_admin": false, 00:14:01.310 "nvme_io": false, 00:14:01.310 "nvme_io_md": false, 00:14:01.310 "write_zeroes": true, 00:14:01.310 "zcopy": true, 00:14:01.310 "get_zone_info": false, 00:14:01.310 "zone_management": false, 00:14:01.310 "zone_append": false, 00:14:01.310 "compare": false, 00:14:01.310 "compare_and_write": false, 00:14:01.310 "abort": true, 00:14:01.310 "seek_hole": false, 00:14:01.310 "seek_data": false, 00:14:01.310 "copy": true, 00:14:01.310 "nvme_iov_md": false 00:14:01.310 }, 00:14:01.310 "memory_domains": [ 00:14:01.310 { 00:14:01.310 "dma_device_id": "system", 00:14:01.310 "dma_device_type": 1 00:14:01.310 }, 00:14:01.310 { 00:14:01.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.310 "dma_device_type": 2 00:14:01.310 } 00:14:01.310 ], 00:14:01.310 "driver_specific": {} 00:14:01.310 } 00:14:01.310 ] 00:14:01.310 13:34:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.310 13:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:01.310 13:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:14:01.310 13:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:01.310 13:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:01.310 13:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:01.310 13:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.310 13:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:01.310 13:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.310 13:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.310 13:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.310 13:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.310 13:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.310 13:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.310 13:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.310 13:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.310 13:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.310 13:34:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.310 "name": "Existed_Raid", 00:14:01.310 "uuid": "b7b4b613-8ecf-4bd5-8e6a-0301b2ec102f", 00:14:01.310 "strip_size_kb": 64, 00:14:01.310 "state": "online", 00:14:01.310 "raid_level": "raid0", 00:14:01.310 "superblock": true, 00:14:01.310 "num_base_bdevs": 3, 00:14:01.310 "num_base_bdevs_discovered": 3, 00:14:01.310 "num_base_bdevs_operational": 3, 00:14:01.310 "base_bdevs_list": [ 00:14:01.310 { 00:14:01.310 "name": "NewBaseBdev", 00:14:01.310 "uuid": "51b69efe-c108-441a-b222-e3895a69a5f1", 00:14:01.310 "is_configured": true, 00:14:01.310 "data_offset": 2048, 00:14:01.310 "data_size": 63488 00:14:01.310 }, 00:14:01.310 { 00:14:01.310 "name": "BaseBdev2", 00:14:01.310 "uuid": "fb8808e5-a587-4811-9d45-c62befe58546", 00:14:01.310 "is_configured": true, 00:14:01.310 "data_offset": 2048, 00:14:01.310 "data_size": 63488 00:14:01.310 }, 00:14:01.310 { 00:14:01.310 "name": "BaseBdev3", 00:14:01.310 "uuid": "17078982-f047-4f27-9aa8-4876ee043f92", 00:14:01.310 "is_configured": true, 00:14:01.310 "data_offset": 2048, 00:14:01.310 "data_size": 63488 00:14:01.310 } 00:14:01.310 ] 00:14:01.310 }' 00:14:01.310 13:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.310 13:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.569 13:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:01.569 13:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:01.569 13:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:01.569 13:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:01.569 13:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:01.569 13:34:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:01.569 13:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:01.569 13:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:01.569 13:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.569 13:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.569 [2024-11-20 13:34:01.008160] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:01.569 13:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.569 13:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:01.569 "name": "Existed_Raid", 00:14:01.569 "aliases": [ 00:14:01.569 "b7b4b613-8ecf-4bd5-8e6a-0301b2ec102f" 00:14:01.569 ], 00:14:01.569 "product_name": "Raid Volume", 00:14:01.569 "block_size": 512, 00:14:01.569 "num_blocks": 190464, 00:14:01.569 "uuid": "b7b4b613-8ecf-4bd5-8e6a-0301b2ec102f", 00:14:01.569 "assigned_rate_limits": { 00:14:01.569 "rw_ios_per_sec": 0, 00:14:01.569 "rw_mbytes_per_sec": 0, 00:14:01.569 "r_mbytes_per_sec": 0, 00:14:01.569 "w_mbytes_per_sec": 0 00:14:01.569 }, 00:14:01.569 "claimed": false, 00:14:01.569 "zoned": false, 00:14:01.569 "supported_io_types": { 00:14:01.569 "read": true, 00:14:01.569 "write": true, 00:14:01.569 "unmap": true, 00:14:01.569 "flush": true, 00:14:01.569 "reset": true, 00:14:01.569 "nvme_admin": false, 00:14:01.569 "nvme_io": false, 00:14:01.569 "nvme_io_md": false, 00:14:01.569 "write_zeroes": true, 00:14:01.569 "zcopy": false, 00:14:01.569 "get_zone_info": false, 00:14:01.570 "zone_management": false, 00:14:01.570 "zone_append": false, 00:14:01.570 "compare": false, 00:14:01.570 "compare_and_write": false, 00:14:01.570 "abort": false, 
00:14:01.570 "seek_hole": false, 00:14:01.570 "seek_data": false, 00:14:01.570 "copy": false, 00:14:01.570 "nvme_iov_md": false 00:14:01.570 }, 00:14:01.570 "memory_domains": [ 00:14:01.570 { 00:14:01.570 "dma_device_id": "system", 00:14:01.570 "dma_device_type": 1 00:14:01.570 }, 00:14:01.570 { 00:14:01.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.570 "dma_device_type": 2 00:14:01.570 }, 00:14:01.570 { 00:14:01.570 "dma_device_id": "system", 00:14:01.570 "dma_device_type": 1 00:14:01.570 }, 00:14:01.570 { 00:14:01.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.570 "dma_device_type": 2 00:14:01.570 }, 00:14:01.570 { 00:14:01.570 "dma_device_id": "system", 00:14:01.570 "dma_device_type": 1 00:14:01.570 }, 00:14:01.570 { 00:14:01.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.570 "dma_device_type": 2 00:14:01.570 } 00:14:01.570 ], 00:14:01.570 "driver_specific": { 00:14:01.570 "raid": { 00:14:01.570 "uuid": "b7b4b613-8ecf-4bd5-8e6a-0301b2ec102f", 00:14:01.570 "strip_size_kb": 64, 00:14:01.570 "state": "online", 00:14:01.570 "raid_level": "raid0", 00:14:01.570 "superblock": true, 00:14:01.570 "num_base_bdevs": 3, 00:14:01.570 "num_base_bdevs_discovered": 3, 00:14:01.570 "num_base_bdevs_operational": 3, 00:14:01.570 "base_bdevs_list": [ 00:14:01.570 { 00:14:01.570 "name": "NewBaseBdev", 00:14:01.570 "uuid": "51b69efe-c108-441a-b222-e3895a69a5f1", 00:14:01.570 "is_configured": true, 00:14:01.570 "data_offset": 2048, 00:14:01.570 "data_size": 63488 00:14:01.570 }, 00:14:01.570 { 00:14:01.570 "name": "BaseBdev2", 00:14:01.570 "uuid": "fb8808e5-a587-4811-9d45-c62befe58546", 00:14:01.570 "is_configured": true, 00:14:01.570 "data_offset": 2048, 00:14:01.570 "data_size": 63488 00:14:01.570 }, 00:14:01.570 { 00:14:01.570 "name": "BaseBdev3", 00:14:01.570 "uuid": "17078982-f047-4f27-9aa8-4876ee043f92", 00:14:01.570 "is_configured": true, 00:14:01.570 "data_offset": 2048, 00:14:01.570 "data_size": 63488 00:14:01.570 } 00:14:01.570 ] 00:14:01.570 } 
00:14:01.570 } 00:14:01.570 }' 00:14:01.570 13:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:01.829 13:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:01.829 BaseBdev2 00:14:01.829 BaseBdev3' 00:14:01.829 13:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.829 13:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:01.829 13:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:01.829 13:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:01.829 13:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.829 13:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.829 13:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.829 13:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.829 13:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:01.829 13:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:01.829 13:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:01.829 13:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.829 13:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 00:14:01.829 13:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.829 13:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.829 13:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.829 13:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:01.829 13:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:01.829 13:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:01.829 13:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:01.829 13:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.829 13:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.829 13:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.829 13:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.829 13:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:01.829 13:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:01.829 13:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:01.829 13:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.829 13:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.829 [2024-11-20 13:34:01.275486] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: 
Existed_Raid 00:14:01.829 [2024-11-20 13:34:01.275522] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:01.829 [2024-11-20 13:34:01.275609] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:01.829 [2024-11-20 13:34:01.275663] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:01.829 [2024-11-20 13:34:01.275678] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:01.829 13:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.829 13:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64204 00:14:01.829 13:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64204 ']' 00:14:01.829 13:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64204 00:14:01.829 13:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:01.829 13:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:01.829 13:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64204 00:14:02.088 killing process with pid 64204 00:14:02.088 13:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:02.088 13:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:02.088 13:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64204' 00:14:02.088 13:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64204 00:14:02.088 [2024-11-20 13:34:01.315217] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:14:02.088 13:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64204 00:14:02.347 [2024-11-20 13:34:01.621027] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:03.283 13:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:03.283 00:14:03.283 real 0m10.020s 00:14:03.283 user 0m15.809s 00:14:03.283 sys 0m1.989s 00:14:03.283 13:34:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:03.283 13:34:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.283 ************************************ 00:14:03.283 END TEST raid_state_function_test_sb 00:14:03.283 ************************************ 00:14:03.586 13:34:02 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:14:03.586 13:34:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:03.586 13:34:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:03.586 13:34:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:03.586 ************************************ 00:14:03.586 START TEST raid_superblock_test 00:14:03.586 ************************************ 00:14:03.586 13:34:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:14:03.586 13:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:14:03.586 13:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:14:03.586 13:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:03.586 13:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:03.586 13:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:03.586 13:34:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:03.586 13:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:03.586 13:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:03.586 13:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:03.586 13:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:03.586 13:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:03.586 13:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:03.586 13:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:03.586 13:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:14:03.586 13:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:03.586 13:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:03.586 13:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=64824 00:14:03.586 13:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:03.586 13:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 64824 00:14:03.586 13:34:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 64824 ']' 00:14:03.586 13:34:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.586 13:34:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:03.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:03.586 13:34:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.586 13:34:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:03.586 13:34:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.586 [2024-11-20 13:34:02.943248] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:14:03.586 [2024-11-20 13:34:02.943907] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64824 ] 00:14:03.847 [2024-11-20 13:34:03.117390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.847 [2024-11-20 13:34:03.233690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.107 [2024-11-20 13:34:03.446497] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:04.107 [2024-11-20 13:34:03.446567] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:04.368 13:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:04.368 13:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:04.368 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:04.368 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:04.368 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:04.368 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:04.368 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:04.368 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:04.368 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:04.368 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:04.368 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:04.368 13:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.368 13:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.368 malloc1 00:14:04.368 13:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.368 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:04.368 13:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.368 13:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.368 [2024-11-20 13:34:03.834941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:04.368 [2024-11-20 13:34:03.835010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.368 [2024-11-20 13:34:03.835034] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:04.368 [2024-11-20 13:34:03.835046] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.368 [2024-11-20 13:34:03.837449] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.368 [2024-11-20 13:34:03.837489] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:04.368 pt1 00:14:04.368 13:34:03 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.368 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:04.368 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:04.368 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:04.368 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:04.368 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:04.368 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:04.368 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:04.368 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:04.368 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:04.368 13:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.368 13:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.629 malloc2 00:14:04.629 13:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.629 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:04.629 13:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.629 13:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.629 [2024-11-20 13:34:03.897927] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:04.629 [2024-11-20 13:34:03.897991] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.629 [2024-11-20 13:34:03.898021] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:04.629 [2024-11-20 13:34:03.898033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.629 [2024-11-20 13:34:03.900363] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.629 [2024-11-20 13:34:03.900402] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:04.629 pt2 00:14:04.629 13:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.629 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:04.629 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:04.629 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:04.629 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:04.629 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:04.629 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:04.629 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:04.629 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:04.629 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:04.629 13:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.629 13:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.629 malloc3 00:14:04.629 13:34:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.629 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:04.629 13:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.629 13:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.629 [2024-11-20 13:34:03.970159] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:04.629 [2024-11-20 13:34:03.970235] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.629 [2024-11-20 13:34:03.970260] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:04.629 [2024-11-20 13:34:03.970280] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.629 [2024-11-20 13:34:03.972707] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.629 [2024-11-20 13:34:03.972748] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:04.629 pt3 00:14:04.629 13:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.629 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:04.629 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:04.629 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:14:04.629 13:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.629 13:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.629 [2024-11-20 13:34:03.982197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:04.629 [2024-11-20 13:34:03.984349] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:04.629 [2024-11-20 13:34:03.984422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:04.629 [2024-11-20 13:34:03.984585] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:04.629 [2024-11-20 13:34:03.984601] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:04.629 [2024-11-20 13:34:03.984881] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:04.629 [2024-11-20 13:34:03.985023] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:04.629 [2024-11-20 13:34:03.985043] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:04.629 [2024-11-20 13:34:03.985204] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.629 13:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.629 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:04.629 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.629 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.629 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:04.629 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:04.629 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:04.629 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.629 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.629 13:34:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.629 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.629 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.629 13:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.629 13:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.629 13:34:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.629 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.629 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.629 "name": "raid_bdev1", 00:14:04.629 "uuid": "a6bad519-fd00-443b-b3c8-e0bbef079372", 00:14:04.629 "strip_size_kb": 64, 00:14:04.629 "state": "online", 00:14:04.629 "raid_level": "raid0", 00:14:04.629 "superblock": true, 00:14:04.629 "num_base_bdevs": 3, 00:14:04.629 "num_base_bdevs_discovered": 3, 00:14:04.629 "num_base_bdevs_operational": 3, 00:14:04.629 "base_bdevs_list": [ 00:14:04.629 { 00:14:04.629 "name": "pt1", 00:14:04.629 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:04.629 "is_configured": true, 00:14:04.629 "data_offset": 2048, 00:14:04.629 "data_size": 63488 00:14:04.629 }, 00:14:04.629 { 00:14:04.629 "name": "pt2", 00:14:04.629 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:04.629 "is_configured": true, 00:14:04.629 "data_offset": 2048, 00:14:04.629 "data_size": 63488 00:14:04.629 }, 00:14:04.629 { 00:14:04.629 "name": "pt3", 00:14:04.629 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:04.629 "is_configured": true, 00:14:04.629 "data_offset": 2048, 00:14:04.629 "data_size": 63488 00:14:04.629 } 00:14:04.629 ] 00:14:04.629 }' 00:14:04.629 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:14:04.629 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.199 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:05.199 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:05.199 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:05.199 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:05.199 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:05.199 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:05.199 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:05.199 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:05.199 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.199 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.199 [2024-11-20 13:34:04.405846] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:05.199 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.199 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:05.199 "name": "raid_bdev1", 00:14:05.199 "aliases": [ 00:14:05.199 "a6bad519-fd00-443b-b3c8-e0bbef079372" 00:14:05.199 ], 00:14:05.199 "product_name": "Raid Volume", 00:14:05.199 "block_size": 512, 00:14:05.199 "num_blocks": 190464, 00:14:05.199 "uuid": "a6bad519-fd00-443b-b3c8-e0bbef079372", 00:14:05.199 "assigned_rate_limits": { 00:14:05.199 "rw_ios_per_sec": 0, 00:14:05.199 "rw_mbytes_per_sec": 0, 00:14:05.199 "r_mbytes_per_sec": 0, 00:14:05.199 "w_mbytes_per_sec": 0 
00:14:05.199 }, 00:14:05.199 "claimed": false, 00:14:05.199 "zoned": false, 00:14:05.199 "supported_io_types": { 00:14:05.199 "read": true, 00:14:05.199 "write": true, 00:14:05.199 "unmap": true, 00:14:05.199 "flush": true, 00:14:05.199 "reset": true, 00:14:05.199 "nvme_admin": false, 00:14:05.199 "nvme_io": false, 00:14:05.199 "nvme_io_md": false, 00:14:05.199 "write_zeroes": true, 00:14:05.199 "zcopy": false, 00:14:05.199 "get_zone_info": false, 00:14:05.199 "zone_management": false, 00:14:05.199 "zone_append": false, 00:14:05.199 "compare": false, 00:14:05.199 "compare_and_write": false, 00:14:05.199 "abort": false, 00:14:05.199 "seek_hole": false, 00:14:05.199 "seek_data": false, 00:14:05.199 "copy": false, 00:14:05.199 "nvme_iov_md": false 00:14:05.199 }, 00:14:05.199 "memory_domains": [ 00:14:05.199 { 00:14:05.199 "dma_device_id": "system", 00:14:05.199 "dma_device_type": 1 00:14:05.199 }, 00:14:05.199 { 00:14:05.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.199 "dma_device_type": 2 00:14:05.199 }, 00:14:05.199 { 00:14:05.199 "dma_device_id": "system", 00:14:05.199 "dma_device_type": 1 00:14:05.199 }, 00:14:05.199 { 00:14:05.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.199 "dma_device_type": 2 00:14:05.199 }, 00:14:05.199 { 00:14:05.199 "dma_device_id": "system", 00:14:05.199 "dma_device_type": 1 00:14:05.199 }, 00:14:05.199 { 00:14:05.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.199 "dma_device_type": 2 00:14:05.199 } 00:14:05.199 ], 00:14:05.199 "driver_specific": { 00:14:05.199 "raid": { 00:14:05.199 "uuid": "a6bad519-fd00-443b-b3c8-e0bbef079372", 00:14:05.199 "strip_size_kb": 64, 00:14:05.199 "state": "online", 00:14:05.199 "raid_level": "raid0", 00:14:05.199 "superblock": true, 00:14:05.199 "num_base_bdevs": 3, 00:14:05.199 "num_base_bdevs_discovered": 3, 00:14:05.199 "num_base_bdevs_operational": 3, 00:14:05.199 "base_bdevs_list": [ 00:14:05.199 { 00:14:05.199 "name": "pt1", 00:14:05.199 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:14:05.199 "is_configured": true, 00:14:05.199 "data_offset": 2048, 00:14:05.199 "data_size": 63488 00:14:05.199 }, 00:14:05.199 { 00:14:05.199 "name": "pt2", 00:14:05.199 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:05.199 "is_configured": true, 00:14:05.199 "data_offset": 2048, 00:14:05.199 "data_size": 63488 00:14:05.199 }, 00:14:05.199 { 00:14:05.199 "name": "pt3", 00:14:05.199 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:05.199 "is_configured": true, 00:14:05.199 "data_offset": 2048, 00:14:05.199 "data_size": 63488 00:14:05.199 } 00:14:05.199 ] 00:14:05.199 } 00:14:05.199 } 00:14:05.199 }' 00:14:05.199 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:05.199 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:05.199 pt2 00:14:05.199 pt3' 00:14:05.199 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.199 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:05.199 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:05.199 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:05.199 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.199 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.200 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.200 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.200 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:14:05.200 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:05.200 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:05.200 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:05.200 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.200 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.200 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.200 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.200 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:05.200 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:05.200 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:05.200 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:05.200 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.200 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.200 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.200 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.200 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:05.200 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:05.200 13:34:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:05.200 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.200 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.200 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:05.200 [2024-11-20 13:34:04.681437] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:05.500 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.500 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a6bad519-fd00-443b-b3c8-e0bbef079372 00:14:05.500 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a6bad519-fd00-443b-b3c8-e0bbef079372 ']' 00:14:05.500 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:05.500 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.500 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.500 [2024-11-20 13:34:04.725106] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:05.500 [2024-11-20 13:34:04.725142] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:05.500 [2024-11-20 13:34:04.725226] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:05.500 [2024-11-20 13:34:04.725288] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:05.500 [2024-11-20 13:34:04.725299] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:05.500 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.500 13:34:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.500 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:05.500 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.500 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.500 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.500 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:05.500 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:05.500 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:05.500 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:05.500 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.500 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.500 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.500 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:05.500 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 
00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.501 [2024-11-20 13:34:04.849004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:05.501 [2024-11-20 13:34:04.851202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:05.501 [2024-11-20 13:34:04.851262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:05.501 [2024-11-20 13:34:04.851318] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:05.501 [2024-11-20 13:34:04.851384] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:05.501 [2024-11-20 13:34:04.851406] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:05.501 [2024-11-20 13:34:04.851443] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:05.501 [2024-11-20 13:34:04.851457] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:05.501 request: 00:14:05.501 { 00:14:05.501 "name": "raid_bdev1", 00:14:05.501 "raid_level": "raid0", 00:14:05.501 "base_bdevs": [ 00:14:05.501 "malloc1", 00:14:05.501 "malloc2", 00:14:05.501 "malloc3" 00:14:05.501 ], 00:14:05.501 "strip_size_kb": 64, 00:14:05.501 "superblock": false, 00:14:05.501 "method": "bdev_raid_create", 00:14:05.501 "req_id": 1 00:14:05.501 } 00:14:05.501 Got JSON-RPC error response 00:14:05.501 response: 00:14:05.501 { 00:14:05.501 "code": -17, 00:14:05.501 "message": "Failed to create RAID bdev raid_bdev1: File exists" 
00:14:05.501 } 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.501 [2024-11-20 13:34:04.900827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:05.501 [2024-11-20 13:34:04.900886] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.501 [2024-11-20 13:34:04.900907] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:05.501 
[2024-11-20 13:34:04.900920] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.501 [2024-11-20 13:34:04.903434] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.501 [2024-11-20 13:34:04.903477] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:05.501 [2024-11-20 13:34:04.903565] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:05.501 [2024-11-20 13:34:04.903624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:05.501 pt1 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.501 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.501 "name": "raid_bdev1", 00:14:05.501 "uuid": "a6bad519-fd00-443b-b3c8-e0bbef079372", 00:14:05.501 "strip_size_kb": 64, 00:14:05.501 "state": "configuring", 00:14:05.501 "raid_level": "raid0", 00:14:05.501 "superblock": true, 00:14:05.501 "num_base_bdevs": 3, 00:14:05.501 "num_base_bdevs_discovered": 1, 00:14:05.501 "num_base_bdevs_operational": 3, 00:14:05.501 "base_bdevs_list": [ 00:14:05.501 { 00:14:05.501 "name": "pt1", 00:14:05.501 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:05.501 "is_configured": true, 00:14:05.501 "data_offset": 2048, 00:14:05.501 "data_size": 63488 00:14:05.501 }, 00:14:05.501 { 00:14:05.501 "name": null, 00:14:05.501 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:05.501 "is_configured": false, 00:14:05.501 "data_offset": 2048, 00:14:05.501 "data_size": 63488 00:14:05.501 }, 00:14:05.501 { 00:14:05.501 "name": null, 00:14:05.502 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:05.502 "is_configured": false, 00:14:05.502 "data_offset": 2048, 00:14:05.502 "data_size": 63488 00:14:05.502 } 00:14:05.502 ] 00:14:05.502 }' 00:14:05.502 13:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.502 13:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.070 13:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:14:06.070 13:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:14:06.070 13:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.070 13:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.070 [2024-11-20 13:34:05.312264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:06.070 [2024-11-20 13:34:05.312335] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.070 [2024-11-20 13:34:05.312364] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:06.070 [2024-11-20 13:34:05.312376] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.070 [2024-11-20 13:34:05.312809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.070 [2024-11-20 13:34:05.312836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:06.070 [2024-11-20 13:34:05.312925] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:06.070 [2024-11-20 13:34:05.312954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:06.070 pt2 00:14:06.070 13:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.070 13:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:06.070 13:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.070 13:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.070 [2024-11-20 13:34:05.320242] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:06.070 13:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.070 13:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 
00:14:06.070 13:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.070 13:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:06.070 13:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:06.070 13:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.070 13:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:06.070 13:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.070 13:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.070 13:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.070 13:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.070 13:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.070 13:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.070 13:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.070 13:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.070 13:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.070 13:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.070 "name": "raid_bdev1", 00:14:06.070 "uuid": "a6bad519-fd00-443b-b3c8-e0bbef079372", 00:14:06.070 "strip_size_kb": 64, 00:14:06.070 "state": "configuring", 00:14:06.070 "raid_level": "raid0", 00:14:06.070 "superblock": true, 00:14:06.070 "num_base_bdevs": 3, 00:14:06.070 "num_base_bdevs_discovered": 1, 00:14:06.070 "num_base_bdevs_operational": 3, 00:14:06.070 
"base_bdevs_list": [ 00:14:06.070 { 00:14:06.070 "name": "pt1", 00:14:06.070 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:06.070 "is_configured": true, 00:14:06.070 "data_offset": 2048, 00:14:06.070 "data_size": 63488 00:14:06.070 }, 00:14:06.070 { 00:14:06.070 "name": null, 00:14:06.070 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:06.070 "is_configured": false, 00:14:06.070 "data_offset": 0, 00:14:06.070 "data_size": 63488 00:14:06.070 }, 00:14:06.070 { 00:14:06.070 "name": null, 00:14:06.070 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:06.070 "is_configured": false, 00:14:06.070 "data_offset": 2048, 00:14:06.070 "data_size": 63488 00:14:06.070 } 00:14:06.070 ] 00:14:06.070 }' 00:14:06.070 13:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.070 13:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.330 13:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:06.330 13:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:06.330 13:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:06.330 13:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.330 13:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.330 [2024-11-20 13:34:05.739820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:06.330 [2024-11-20 13:34:05.739897] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.330 [2024-11-20 13:34:05.739918] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:06.330 [2024-11-20 13:34:05.739932] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.330 [2024-11-20 
13:34:05.740409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.330 [2024-11-20 13:34:05.740440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:06.330 [2024-11-20 13:34:05.740526] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:06.330 [2024-11-20 13:34:05.740551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:06.330 pt2 00:14:06.330 13:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.330 13:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:06.330 13:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:06.330 13:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:06.330 13:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.330 13:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.330 [2024-11-20 13:34:05.747794] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:06.330 [2024-11-20 13:34:05.747856] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.330 [2024-11-20 13:34:05.747873] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:06.330 [2024-11-20 13:34:05.747886] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.330 [2024-11-20 13:34:05.748300] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.330 [2024-11-20 13:34:05.748333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:06.330 [2024-11-20 13:34:05.748403] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on 
bdev pt3 00:14:06.330 [2024-11-20 13:34:05.748426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:06.330 [2024-11-20 13:34:05.748536] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:06.330 [2024-11-20 13:34:05.748550] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:06.330 [2024-11-20 13:34:05.748813] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:06.330 [2024-11-20 13:34:05.748978] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:06.330 [2024-11-20 13:34:05.748987] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:06.330 [2024-11-20 13:34:05.749141] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.330 pt3 00:14:06.330 13:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.330 13:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:06.330 13:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:06.330 13:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:06.330 13:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.330 13:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.330 13:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:06.331 13:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.331 13:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:06.331 13:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:14:06.331 13:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.331 13:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.331 13:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.331 13:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.331 13:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.331 13:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.331 13:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.331 13:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.331 13:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.331 "name": "raid_bdev1", 00:14:06.331 "uuid": "a6bad519-fd00-443b-b3c8-e0bbef079372", 00:14:06.331 "strip_size_kb": 64, 00:14:06.331 "state": "online", 00:14:06.331 "raid_level": "raid0", 00:14:06.331 "superblock": true, 00:14:06.331 "num_base_bdevs": 3, 00:14:06.331 "num_base_bdevs_discovered": 3, 00:14:06.331 "num_base_bdevs_operational": 3, 00:14:06.331 "base_bdevs_list": [ 00:14:06.331 { 00:14:06.331 "name": "pt1", 00:14:06.331 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:06.331 "is_configured": true, 00:14:06.331 "data_offset": 2048, 00:14:06.331 "data_size": 63488 00:14:06.331 }, 00:14:06.331 { 00:14:06.331 "name": "pt2", 00:14:06.331 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:06.331 "is_configured": true, 00:14:06.331 "data_offset": 2048, 00:14:06.331 "data_size": 63488 00:14:06.331 }, 00:14:06.331 { 00:14:06.331 "name": "pt3", 00:14:06.331 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:06.331 "is_configured": true, 00:14:06.331 "data_offset": 2048, 
00:14:06.331 "data_size": 63488 00:14:06.331 } 00:14:06.331 ] 00:14:06.331 }' 00:14:06.331 13:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.331 13:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.900 13:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:06.900 13:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:06.900 13:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:06.900 13:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:06.900 13:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:06.900 13:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:06.900 13:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:06.900 13:34:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.900 13:34:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.900 13:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:06.900 [2024-11-20 13:34:06.127587] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:06.900 13:34:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.900 13:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:06.900 "name": "raid_bdev1", 00:14:06.900 "aliases": [ 00:14:06.901 "a6bad519-fd00-443b-b3c8-e0bbef079372" 00:14:06.901 ], 00:14:06.901 "product_name": "Raid Volume", 00:14:06.901 "block_size": 512, 00:14:06.901 "num_blocks": 190464, 00:14:06.901 "uuid": "a6bad519-fd00-443b-b3c8-e0bbef079372", 00:14:06.901 
"assigned_rate_limits": { 00:14:06.901 "rw_ios_per_sec": 0, 00:14:06.901 "rw_mbytes_per_sec": 0, 00:14:06.901 "r_mbytes_per_sec": 0, 00:14:06.901 "w_mbytes_per_sec": 0 00:14:06.901 }, 00:14:06.901 "claimed": false, 00:14:06.901 "zoned": false, 00:14:06.901 "supported_io_types": { 00:14:06.901 "read": true, 00:14:06.901 "write": true, 00:14:06.901 "unmap": true, 00:14:06.901 "flush": true, 00:14:06.901 "reset": true, 00:14:06.901 "nvme_admin": false, 00:14:06.901 "nvme_io": false, 00:14:06.901 "nvme_io_md": false, 00:14:06.901 "write_zeroes": true, 00:14:06.901 "zcopy": false, 00:14:06.901 "get_zone_info": false, 00:14:06.901 "zone_management": false, 00:14:06.901 "zone_append": false, 00:14:06.901 "compare": false, 00:14:06.901 "compare_and_write": false, 00:14:06.901 "abort": false, 00:14:06.901 "seek_hole": false, 00:14:06.901 "seek_data": false, 00:14:06.901 "copy": false, 00:14:06.901 "nvme_iov_md": false 00:14:06.901 }, 00:14:06.901 "memory_domains": [ 00:14:06.901 { 00:14:06.901 "dma_device_id": "system", 00:14:06.901 "dma_device_type": 1 00:14:06.901 }, 00:14:06.901 { 00:14:06.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:06.901 "dma_device_type": 2 00:14:06.901 }, 00:14:06.901 { 00:14:06.901 "dma_device_id": "system", 00:14:06.901 "dma_device_type": 1 00:14:06.901 }, 00:14:06.901 { 00:14:06.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:06.901 "dma_device_type": 2 00:14:06.901 }, 00:14:06.901 { 00:14:06.901 "dma_device_id": "system", 00:14:06.901 "dma_device_type": 1 00:14:06.901 }, 00:14:06.901 { 00:14:06.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:06.901 "dma_device_type": 2 00:14:06.901 } 00:14:06.901 ], 00:14:06.901 "driver_specific": { 00:14:06.901 "raid": { 00:14:06.901 "uuid": "a6bad519-fd00-443b-b3c8-e0bbef079372", 00:14:06.901 "strip_size_kb": 64, 00:14:06.901 "state": "online", 00:14:06.901 "raid_level": "raid0", 00:14:06.901 "superblock": true, 00:14:06.901 "num_base_bdevs": 3, 00:14:06.901 "num_base_bdevs_discovered": 3, 
00:14:06.901 "num_base_bdevs_operational": 3, 00:14:06.901 "base_bdevs_list": [ 00:14:06.901 { 00:14:06.901 "name": "pt1", 00:14:06.901 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:06.901 "is_configured": true, 00:14:06.901 "data_offset": 2048, 00:14:06.901 "data_size": 63488 00:14:06.901 }, 00:14:06.901 { 00:14:06.901 "name": "pt2", 00:14:06.901 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:06.901 "is_configured": true, 00:14:06.901 "data_offset": 2048, 00:14:06.901 "data_size": 63488 00:14:06.901 }, 00:14:06.901 { 00:14:06.901 "name": "pt3", 00:14:06.901 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:06.901 "is_configured": true, 00:14:06.901 "data_offset": 2048, 00:14:06.901 "data_size": 63488 00:14:06.901 } 00:14:06.901 ] 00:14:06.901 } 00:14:06.901 } 00:14:06.901 }' 00:14:06.901 13:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:06.901 13:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:06.901 pt2 00:14:06.901 pt3' 00:14:06.901 13:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:06.901 13:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:06.901 13:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:06.901 13:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:06.901 13:34:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.901 13:34:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.901 13:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:06.901 13:34:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.901 13:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:06.901 13:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:06.901 13:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:06.901 13:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:06.901 13:34:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.901 13:34:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.901 13:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:06.901 13:34:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.901 13:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:06.901 13:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:06.901 13:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:06.901 13:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:06.901 13:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:06.901 13:34:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.901 13:34:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.901 13:34:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.901 13:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:14:06.901 13:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:06.901 13:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:06.901 13:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:06.901 13:34:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.901 13:34:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.901 [2024-11-20 13:34:06.371451] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:07.161 13:34:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.161 13:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a6bad519-fd00-443b-b3c8-e0bbef079372 '!=' a6bad519-fd00-443b-b3c8-e0bbef079372 ']' 00:14:07.161 13:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:14:07.161 13:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:07.161 13:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:07.161 13:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 64824 00:14:07.161 13:34:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 64824 ']' 00:14:07.161 13:34:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 64824 00:14:07.161 13:34:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:14:07.161 13:34:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:07.161 13:34:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64824 00:14:07.161 13:34:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:07.161 
13:34:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:07.161 13:34:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64824' 00:14:07.161 killing process with pid 64824 00:14:07.161 13:34:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 64824 00:14:07.161 [2024-11-20 13:34:06.445915] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:07.161 [2024-11-20 13:34:06.446145] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:07.161 13:34:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 64824 00:14:07.161 [2024-11-20 13:34:06.446355] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:07.161 [2024-11-20 13:34:06.446381] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:07.420 [2024-11-20 13:34:06.751300] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:08.800 13:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:08.800 00:14:08.800 real 0m5.027s 00:14:08.800 user 0m7.118s 00:14:08.800 sys 0m1.019s 00:14:08.800 ************************************ 00:14:08.800 END TEST raid_superblock_test 00:14:08.800 ************************************ 00:14:08.800 13:34:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:08.800 13:34:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.800 13:34:07 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:14:08.800 13:34:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:08.800 13:34:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:08.800 13:34:07 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:14:08.800 ************************************ 00:14:08.800 START TEST raid_read_error_test 00:14:08.800 ************************************ 00:14:08.800 13:34:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:14:08.800 13:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:14:08.800 13:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:14:08.800 13:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:14:08.800 13:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:08.800 13:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:08.800 13:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:08.800 13:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:08.800 13:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:08.800 13:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:08.800 13:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:08.800 13:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:08.800 13:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:08.800 13:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:08.800 13:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:08.800 13:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:08.800 13:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:08.800 13:34:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:08.800 13:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:08.800 13:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:08.800 13:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:08.800 13:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:08.800 13:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:14:08.800 13:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:08.800 13:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:08.800 13:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:08.800 13:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.1pzy5BYW8v 00:14:08.800 13:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65072 00:14:08.800 13:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65072 00:14:08.800 13:34:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:08.800 13:34:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65072 ']' 00:14:08.800 13:34:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:08.800 13:34:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:08.800 13:34:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.800 13:34:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:08.800 13:34:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.800 [2024-11-20 13:34:08.058794] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:14:08.800 [2024-11-20 13:34:08.059397] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65072 ] 00:14:08.800 [2024-11-20 13:34:08.239295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.060 [2024-11-20 13:34:08.355763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.319 [2024-11-20 13:34:08.579154] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:09.319 [2024-11-20 13:34:08.579219] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:09.579 13:34:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:09.579 13:34:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:09.579 13:34:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:09.579 13:34:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:09.579 13:34:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.579 13:34:08 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:09.579 BaseBdev1_malloc 00:14:09.579 13:34:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.579 13:34:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:09.579 13:34:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.579 13:34:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.579 true 00:14:09.579 13:34:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.579 13:34:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:09.579 13:34:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.579 13:34:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.579 [2024-11-20 13:34:08.940144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:09.579 [2024-11-20 13:34:08.940202] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.579 [2024-11-20 13:34:08.940225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:09.579 [2024-11-20 13:34:08.940239] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.579 [2024-11-20 13:34:08.942589] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.579 [2024-11-20 13:34:08.942637] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:09.579 BaseBdev1 00:14:09.579 13:34:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.579 13:34:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:09.579 13:34:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:09.579 13:34:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.579 13:34:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.579 BaseBdev2_malloc 00:14:09.579 13:34:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.579 13:34:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:09.579 13:34:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.579 13:34:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.579 true 00:14:09.579 13:34:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.579 13:34:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:09.579 13:34:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.579 13:34:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.579 [2024-11-20 13:34:08.999757] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:09.579 [2024-11-20 13:34:08.999817] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.579 [2024-11-20 13:34:08.999837] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:09.579 [2024-11-20 13:34:08.999851] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.579 [2024-11-20 13:34:09.002174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.579 [2024-11-20 13:34:09.002213] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 
00:14:09.580 BaseBdev2 00:14:09.580 13:34:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.580 13:34:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:09.580 13:34:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:09.580 13:34:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.580 13:34:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.580 BaseBdev3_malloc 00:14:09.580 13:34:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.580 13:34:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:09.580 13:34:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.580 13:34:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.839 true 00:14:09.839 13:34:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.839 13:34:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:09.839 13:34:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.839 13:34:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.839 [2024-11-20 13:34:09.065649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:09.839 [2024-11-20 13:34:09.065706] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.839 [2024-11-20 13:34:09.065725] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:09.839 [2024-11-20 13:34:09.065739] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:14:09.839 [2024-11-20 13:34:09.068075] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.839 [2024-11-20 13:34:09.068115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:09.839 BaseBdev3 00:14:09.839 13:34:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.840 13:34:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:14:09.840 13:34:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.840 13:34:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.840 [2024-11-20 13:34:09.073717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:09.840 [2024-11-20 13:34:09.075872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:09.840 [2024-11-20 13:34:09.076069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:09.840 [2024-11-20 13:34:09.076380] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:09.840 [2024-11-20 13:34:09.076489] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:09.840 [2024-11-20 13:34:09.076784] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:14:09.840 [2024-11-20 13:34:09.076977] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:09.840 [2024-11-20 13:34:09.077022] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:09.840 [2024-11-20 13:34:09.077330] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:09.840 13:34:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.840 13:34:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:09.840 13:34:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:09.840 13:34:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:09.840 13:34:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:09.840 13:34:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:09.840 13:34:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:09.840 13:34:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.840 13:34:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.840 13:34:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.840 13:34:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.840 13:34:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.840 13:34:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.840 13:34:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.840 13:34:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.840 13:34:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.840 13:34:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.840 "name": "raid_bdev1", 00:14:09.840 "uuid": "84ff680c-559d-469a-b056-71d54132e000", 00:14:09.840 "strip_size_kb": 64, 00:14:09.840 "state": "online", 00:14:09.840 "raid_level": "raid0", 
00:14:09.840 "superblock": true, 00:14:09.840 "num_base_bdevs": 3, 00:14:09.840 "num_base_bdevs_discovered": 3, 00:14:09.840 "num_base_bdevs_operational": 3, 00:14:09.840 "base_bdevs_list": [ 00:14:09.840 { 00:14:09.840 "name": "BaseBdev1", 00:14:09.840 "uuid": "2e764b34-3253-5ee7-a962-332922016cb4", 00:14:09.840 "is_configured": true, 00:14:09.840 "data_offset": 2048, 00:14:09.840 "data_size": 63488 00:14:09.840 }, 00:14:09.840 { 00:14:09.840 "name": "BaseBdev2", 00:14:09.840 "uuid": "fb66f257-0e09-5291-8abe-5bd42cbe16d3", 00:14:09.840 "is_configured": true, 00:14:09.840 "data_offset": 2048, 00:14:09.840 "data_size": 63488 00:14:09.840 }, 00:14:09.840 { 00:14:09.840 "name": "BaseBdev3", 00:14:09.840 "uuid": "9466fc70-717e-5d58-90d7-df743c9e14b8", 00:14:09.840 "is_configured": true, 00:14:09.840 "data_offset": 2048, 00:14:09.840 "data_size": 63488 00:14:09.840 } 00:14:09.840 ] 00:14:09.840 }' 00:14:09.840 13:34:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.840 13:34:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.099 13:34:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:10.099 13:34:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:10.358 [2024-11-20 13:34:09.586431] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:14:11.296 13:34:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:11.296 13:34:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.296 13:34:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.296 13:34:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.296 13:34:10 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:11.296 13:34:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:11.296 13:34:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:14:11.296 13:34:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:11.296 13:34:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:11.296 13:34:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.296 13:34:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:11.296 13:34:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:11.296 13:34:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:11.296 13:34:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.296 13:34:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.296 13:34:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.296 13:34:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.296 13:34:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.296 13:34:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.296 13:34:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.296 13:34:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.296 13:34:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.296 13:34:10 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.296 "name": "raid_bdev1", 00:14:11.296 "uuid": "84ff680c-559d-469a-b056-71d54132e000", 00:14:11.296 "strip_size_kb": 64, 00:14:11.296 "state": "online", 00:14:11.296 "raid_level": "raid0", 00:14:11.296 "superblock": true, 00:14:11.296 "num_base_bdevs": 3, 00:14:11.296 "num_base_bdevs_discovered": 3, 00:14:11.296 "num_base_bdevs_operational": 3, 00:14:11.296 "base_bdevs_list": [ 00:14:11.296 { 00:14:11.296 "name": "BaseBdev1", 00:14:11.296 "uuid": "2e764b34-3253-5ee7-a962-332922016cb4", 00:14:11.296 "is_configured": true, 00:14:11.296 "data_offset": 2048, 00:14:11.296 "data_size": 63488 00:14:11.296 }, 00:14:11.296 { 00:14:11.296 "name": "BaseBdev2", 00:14:11.296 "uuid": "fb66f257-0e09-5291-8abe-5bd42cbe16d3", 00:14:11.296 "is_configured": true, 00:14:11.296 "data_offset": 2048, 00:14:11.296 "data_size": 63488 00:14:11.296 }, 00:14:11.296 { 00:14:11.296 "name": "BaseBdev3", 00:14:11.296 "uuid": "9466fc70-717e-5d58-90d7-df743c9e14b8", 00:14:11.296 "is_configured": true, 00:14:11.296 "data_offset": 2048, 00:14:11.296 "data_size": 63488 00:14:11.296 } 00:14:11.296 ] 00:14:11.296 }' 00:14:11.296 13:34:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.296 13:34:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.556 13:34:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:11.556 13:34:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.556 13:34:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.556 [2024-11-20 13:34:10.934621] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:11.556 [2024-11-20 13:34:10.934807] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:11.556 [2024-11-20 13:34:10.937473] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:14:11.556 [2024-11-20 13:34:10.937511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.556 [2024-11-20 13:34:10.937549] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:11.556 [2024-11-20 13:34:10.937560] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:11.556 { 00:14:11.556 "results": [ 00:14:11.556 { 00:14:11.556 "job": "raid_bdev1", 00:14:11.556 "core_mask": "0x1", 00:14:11.556 "workload": "randrw", 00:14:11.556 "percentage": 50, 00:14:11.556 "status": "finished", 00:14:11.556 "queue_depth": 1, 00:14:11.556 "io_size": 131072, 00:14:11.556 "runtime": 1.34864, 00:14:11.556 "iops": 16198.540752165145, 00:14:11.556 "mibps": 2024.8175940206431, 00:14:11.556 "io_failed": 1, 00:14:11.556 "io_timeout": 0, 00:14:11.556 "avg_latency_us": 85.21258765091952, 00:14:11.556 "min_latency_us": 26.730923694779115, 00:14:11.556 "max_latency_us": 1414.6827309236949 00:14:11.556 } 00:14:11.556 ], 00:14:11.556 "core_count": 1 00:14:11.556 } 00:14:11.556 13:34:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.556 13:34:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65072 00:14:11.556 13:34:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65072 ']' 00:14:11.556 13:34:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65072 00:14:11.556 13:34:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:14:11.556 13:34:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:11.556 13:34:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65072 00:14:11.556 13:34:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:14:11.556 killing process with pid 65072 00:14:11.556 13:34:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:11.556 13:34:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65072' 00:14:11.556 13:34:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65072 00:14:11.556 [2024-11-20 13:34:10.986359] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:11.556 13:34:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65072 00:14:11.815 [2024-11-20 13:34:11.218496] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:13.193 13:34:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:13.193 13:34:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.1pzy5BYW8v 00:14:13.193 13:34:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:13.193 13:34:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:14:13.193 13:34:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:14:13.193 ************************************ 00:14:13.193 END TEST raid_read_error_test 00:14:13.193 ************************************ 00:14:13.194 13:34:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:13.194 13:34:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:13.194 13:34:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:14:13.194 00:14:13.194 real 0m4.554s 00:14:13.194 user 0m5.331s 00:14:13.194 sys 0m0.611s 00:14:13.194 13:34:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:13.194 13:34:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.194 
13:34:12 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:14:13.194 13:34:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:13.194 13:34:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:13.194 13:34:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:13.194 ************************************ 00:14:13.194 START TEST raid_write_error_test 00:14:13.194 ************************************ 00:14:13.194 13:34:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:14:13.194 13:34:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:14:13.194 13:34:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:14:13.194 13:34:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:13.194 13:34:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:13.194 13:34:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:13.194 13:34:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:13.194 13:34:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:13.194 13:34:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:13.194 13:34:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:13.194 13:34:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:13.194 13:34:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:13.194 13:34:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:13.194 13:34:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:13.194 13:34:12 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:13.194 13:34:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:13.194 13:34:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:13.194 13:34:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:13.194 13:34:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:13.194 13:34:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:13.194 13:34:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:13.194 13:34:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:13.194 13:34:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:14:13.194 13:34:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:13.194 13:34:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:13.194 13:34:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:13.194 13:34:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.l9OJUsBUyi 00:14:13.194 13:34:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65224 00:14:13.194 13:34:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:13.194 13:34:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65224 00:14:13.194 13:34:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65224 ']' 00:14:13.194 13:34:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:14:13.194 13:34:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:13.194 13:34:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.194 13:34:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:13.194 13:34:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.453 [2024-11-20 13:34:12.721314] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:14:13.453 [2024-11-20 13:34:12.721725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65224 ] 00:14:13.453 [2024-11-20 13:34:12.910403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.712 [2024-11-20 13:34:13.039649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.970 [2024-11-20 13:34:13.272600] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:13.970 [2024-11-20 13:34:13.272879] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:14.229 13:34:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:14.229 13:34:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:14.229 13:34:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:14.229 13:34:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:14.229 13:34:13 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.229 13:34:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.229 BaseBdev1_malloc 00:14:14.229 13:34:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.229 13:34:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:14.229 13:34:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.229 13:34:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.488 true 00:14:14.488 13:34:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.488 13:34:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:14.488 13:34:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.488 13:34:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.489 [2024-11-20 13:34:13.719330] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:14.489 [2024-11-20 13:34:13.719398] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.489 [2024-11-20 13:34:13.719429] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:14.489 [2024-11-20 13:34:13.719448] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.489 [2024-11-20 13:34:13.722118] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.489 [2024-11-20 13:34:13.722333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:14.489 BaseBdev1 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.489 BaseBdev2_malloc 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.489 true 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.489 [2024-11-20 13:34:13.782642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:14.489 [2024-11-20 13:34:13.782709] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.489 [2024-11-20 13:34:13.782730] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:14.489 [2024-11-20 13:34:13.782745] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.489 [2024-11-20 13:34:13.785364] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:14:14.489 [2024-11-20 13:34:13.785577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:14.489 BaseBdev2 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.489 BaseBdev3_malloc 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.489 true 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.489 [2024-11-20 13:34:13.854570] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:14.489 [2024-11-20 13:34:13.854632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.489 [2024-11-20 13:34:13.854655] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:14.489 [2024-11-20 13:34:13.854670] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.489 [2024-11-20 13:34:13.857272] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.489 [2024-11-20 13:34:13.857319] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:14.489 BaseBdev3 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.489 [2024-11-20 13:34:13.862638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:14.489 [2024-11-20 13:34:13.864992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:14.489 [2024-11-20 13:34:13.865242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:14.489 [2024-11-20 13:34:13.865468] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:14.489 [2024-11-20 13:34:13.865487] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:14.489 [2024-11-20 13:34:13.865838] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:14:14.489 [2024-11-20 13:34:13.866028] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:14.489 [2024-11-20 13:34:13.866049] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
00:14:14.489 [2024-11-20 13:34:13.866234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.489 
"name": "raid_bdev1", 00:14:14.489 "uuid": "5ca3bf50-190f-477e-aaa8-7e819be60271", 00:14:14.489 "strip_size_kb": 64, 00:14:14.489 "state": "online", 00:14:14.489 "raid_level": "raid0", 00:14:14.489 "superblock": true, 00:14:14.489 "num_base_bdevs": 3, 00:14:14.489 "num_base_bdevs_discovered": 3, 00:14:14.489 "num_base_bdevs_operational": 3, 00:14:14.489 "base_bdevs_list": [ 00:14:14.489 { 00:14:14.489 "name": "BaseBdev1", 00:14:14.489 "uuid": "d3dd9c26-bd37-5cc1-b35b-89eb41c1d09d", 00:14:14.489 "is_configured": true, 00:14:14.489 "data_offset": 2048, 00:14:14.489 "data_size": 63488 00:14:14.489 }, 00:14:14.489 { 00:14:14.489 "name": "BaseBdev2", 00:14:14.489 "uuid": "63982693-4b07-5f6e-9764-1b3500176c86", 00:14:14.489 "is_configured": true, 00:14:14.489 "data_offset": 2048, 00:14:14.489 "data_size": 63488 00:14:14.489 }, 00:14:14.489 { 00:14:14.489 "name": "BaseBdev3", 00:14:14.489 "uuid": "3d699039-703a-5d5e-a439-dd691c401b0c", 00:14:14.489 "is_configured": true, 00:14:14.489 "data_offset": 2048, 00:14:14.489 "data_size": 63488 00:14:14.489 } 00:14:14.489 ] 00:14:14.489 }' 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.489 13:34:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.081 13:34:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:15.082 13:34:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:15.082 [2024-11-20 13:34:14.472001] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:14:16.020 13:34:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:16.020 13:34:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.020 13:34:15 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:16.020 13:34:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.020 13:34:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:16.020 13:34:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:16.020 13:34:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:14:16.020 13:34:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:16.020 13:34:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:16.020 13:34:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.020 13:34:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:16.020 13:34:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:16.020 13:34:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:16.020 13:34:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.020 13:34:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.020 13:34:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.020 13:34:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.020 13:34:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.020 13:34:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.020 13:34:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.020 13:34:15 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@10 -- # set +x 00:14:16.020 13:34:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.020 13:34:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.020 "name": "raid_bdev1", 00:14:16.020 "uuid": "5ca3bf50-190f-477e-aaa8-7e819be60271", 00:14:16.020 "strip_size_kb": 64, 00:14:16.020 "state": "online", 00:14:16.020 "raid_level": "raid0", 00:14:16.020 "superblock": true, 00:14:16.020 "num_base_bdevs": 3, 00:14:16.020 "num_base_bdevs_discovered": 3, 00:14:16.020 "num_base_bdevs_operational": 3, 00:14:16.020 "base_bdevs_list": [ 00:14:16.020 { 00:14:16.020 "name": "BaseBdev1", 00:14:16.020 "uuid": "d3dd9c26-bd37-5cc1-b35b-89eb41c1d09d", 00:14:16.020 "is_configured": true, 00:14:16.020 "data_offset": 2048, 00:14:16.020 "data_size": 63488 00:14:16.020 }, 00:14:16.020 { 00:14:16.020 "name": "BaseBdev2", 00:14:16.020 "uuid": "63982693-4b07-5f6e-9764-1b3500176c86", 00:14:16.020 "is_configured": true, 00:14:16.020 "data_offset": 2048, 00:14:16.020 "data_size": 63488 00:14:16.020 }, 00:14:16.020 { 00:14:16.020 "name": "BaseBdev3", 00:14:16.020 "uuid": "3d699039-703a-5d5e-a439-dd691c401b0c", 00:14:16.020 "is_configured": true, 00:14:16.020 "data_offset": 2048, 00:14:16.020 "data_size": 63488 00:14:16.020 } 00:14:16.020 ] 00:14:16.020 }' 00:14:16.020 13:34:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.020 13:34:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.588 13:34:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:16.588 13:34:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.588 13:34:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.588 [2024-11-20 13:34:15.817041] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 
00:14:16.589 [2024-11-20 13:34:15.817105] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:16.589 { 00:14:16.589 "results": [ 00:14:16.589 { 00:14:16.589 "job": "raid_bdev1", 00:14:16.589 "core_mask": "0x1", 00:14:16.589 "workload": "randrw", 00:14:16.589 "percentage": 50, 00:14:16.589 "status": "finished", 00:14:16.589 "queue_depth": 1, 00:14:16.589 "io_size": 131072, 00:14:16.589 "runtime": 1.345015, 00:14:16.589 "iops": 14299.468779158597, 00:14:16.589 "mibps": 1787.4335973948246, 00:14:16.589 "io_failed": 1, 00:14:16.589 "io_timeout": 0, 00:14:16.589 "avg_latency_us": 96.28359151485843, 00:14:16.589 "min_latency_us": 29.60963855421687, 00:14:16.589 "max_latency_us": 1644.9799196787149 00:14:16.589 } 00:14:16.589 ], 00:14:16.589 "core_count": 1 00:14:16.589 } 00:14:16.589 [2024-11-20 13:34:15.820923] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:16.589 [2024-11-20 13:34:15.821080] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:16.589 [2024-11-20 13:34:15.821159] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:16.589 [2024-11-20 13:34:15.821181] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:16.589 13:34:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.589 13:34:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65224 00:14:16.589 13:34:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65224 ']' 00:14:16.589 13:34:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65224 00:14:16.589 13:34:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:14:16.589 13:34:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:14:16.589 13:34:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65224 00:14:16.589 killing process with pid 65224 00:14:16.589 13:34:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:16.589 13:34:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:16.589 13:34:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65224' 00:14:16.589 13:34:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65224 00:14:16.589 [2024-11-20 13:34:15.875624] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:16.589 13:34:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65224 00:14:16.848 [2024-11-20 13:34:16.128399] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:18.225 13:34:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.l9OJUsBUyi 00:14:18.225 13:34:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:18.225 13:34:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:18.225 13:34:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:14:18.225 13:34:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:14:18.225 13:34:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:18.225 13:34:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:18.225 13:34:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:14:18.225 00:14:18.225 real 0m4.881s 00:14:18.225 user 0m5.898s 00:14:18.225 sys 0m0.639s 00:14:18.225 13:34:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:18.225 13:34:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.225 ************************************ 00:14:18.225 END TEST raid_write_error_test 00:14:18.225 ************************************ 00:14:18.225 13:34:17 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:14:18.225 13:34:17 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:14:18.225 13:34:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:18.225 13:34:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:18.225 13:34:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:18.225 ************************************ 00:14:18.225 START TEST raid_state_function_test 00:14:18.225 ************************************ 00:14:18.225 13:34:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:14:18.225 13:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:14:18.225 13:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:18.225 13:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:18.225 13:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:18.225 13:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:18.225 13:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:18.225 13:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:18.225 13:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:18.225 13:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:18.225 13:34:17 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:18.225 13:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:18.225 13:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:18.225 13:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:18.225 13:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:18.225 13:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:18.225 13:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:18.225 13:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:18.225 13:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:18.225 13:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:18.225 13:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:18.225 13:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:18.225 13:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:14:18.225 13:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:18.225 13:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:18.225 13:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:18.225 13:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:18.225 13:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65362 00:14:18.225 Process raid pid: 65362 00:14:18.225 13:34:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:18.225 13:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65362' 00:14:18.225 13:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65362 00:14:18.225 13:34:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65362 ']' 00:14:18.225 13:34:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.225 13:34:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:18.225 13:34:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.225 13:34:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:18.225 13:34:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.225 [2024-11-20 13:34:17.643098] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:14:18.225 [2024-11-20 13:34:17.643237] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:18.484 [2024-11-20 13:34:17.828705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.484 [2024-11-20 13:34:17.961193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.743 [2024-11-20 13:34:18.193438] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:18.743 [2024-11-20 13:34:18.193486] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:19.312 13:34:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:19.312 13:34:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:19.312 13:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:19.312 13:34:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.312 13:34:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.312 [2024-11-20 13:34:18.534483] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:19.312 [2024-11-20 13:34:18.534541] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:19.312 [2024-11-20 13:34:18.534559] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:19.312 [2024-11-20 13:34:18.534582] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:19.312 [2024-11-20 13:34:18.534592] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:14:19.312 [2024-11-20 13:34:18.534605] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:19.312 13:34:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.312 13:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:19.312 13:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.312 13:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.312 13:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:19.312 13:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.312 13:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:19.312 13:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.312 13:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.312 13:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.312 13:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.312 13:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.312 13:34:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.312 13:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.312 13:34:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.312 13:34:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.312 13:34:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.312 "name": "Existed_Raid", 00:14:19.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.312 "strip_size_kb": 64, 00:14:19.312 "state": "configuring", 00:14:19.312 "raid_level": "concat", 00:14:19.312 "superblock": false, 00:14:19.312 "num_base_bdevs": 3, 00:14:19.312 "num_base_bdevs_discovered": 0, 00:14:19.312 "num_base_bdevs_operational": 3, 00:14:19.312 "base_bdevs_list": [ 00:14:19.312 { 00:14:19.312 "name": "BaseBdev1", 00:14:19.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.312 "is_configured": false, 00:14:19.312 "data_offset": 0, 00:14:19.312 "data_size": 0 00:14:19.312 }, 00:14:19.312 { 00:14:19.312 "name": "BaseBdev2", 00:14:19.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.312 "is_configured": false, 00:14:19.312 "data_offset": 0, 00:14:19.312 "data_size": 0 00:14:19.312 }, 00:14:19.312 { 00:14:19.312 "name": "BaseBdev3", 00:14:19.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.312 "is_configured": false, 00:14:19.312 "data_offset": 0, 00:14:19.312 "data_size": 0 00:14:19.312 } 00:14:19.312 ] 00:14:19.312 }' 00:14:19.312 13:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.312 13:34:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.572 13:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:19.572 13:34:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.572 13:34:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.572 [2024-11-20 13:34:18.994448] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:19.572 [2024-11-20 13:34:18.994621] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 
00:14:19.572 13:34:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.572 13:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:19.572 13:34:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.572 13:34:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.572 [2024-11-20 13:34:19.006465] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:19.572 [2024-11-20 13:34:19.006674] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:19.572 [2024-11-20 13:34:19.006897] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:19.572 [2024-11-20 13:34:19.006947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:19.572 [2024-11-20 13:34:19.006979] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:19.572 [2024-11-20 13:34:19.006995] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:19.572 13:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.572 13:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:19.572 13:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.572 13:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.841 [2024-11-20 13:34:19.058032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:19.841 BaseBdev1 00:14:19.841 13:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:14:19.841 13:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:19.841 13:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:19.841 13:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:19.841 13:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:19.841 13:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:19.841 13:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:19.842 13:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:19.842 13:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.842 13:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.842 13:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.842 13:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:19.842 13:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.842 13:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.842 [ 00:14:19.842 { 00:14:19.842 "name": "BaseBdev1", 00:14:19.842 "aliases": [ 00:14:19.842 "8a75ad71-bcc0-4bd0-8c6a-9e3a0c594b10" 00:14:19.842 ], 00:14:19.842 "product_name": "Malloc disk", 00:14:19.842 "block_size": 512, 00:14:19.842 "num_blocks": 65536, 00:14:19.842 "uuid": "8a75ad71-bcc0-4bd0-8c6a-9e3a0c594b10", 00:14:19.842 "assigned_rate_limits": { 00:14:19.842 "rw_ios_per_sec": 0, 00:14:19.842 "rw_mbytes_per_sec": 0, 00:14:19.842 "r_mbytes_per_sec": 0, 00:14:19.842 "w_mbytes_per_sec": 0 00:14:19.842 }, 
00:14:19.842 "claimed": true, 00:14:19.842 "claim_type": "exclusive_write", 00:14:19.842 "zoned": false, 00:14:19.842 "supported_io_types": { 00:14:19.842 "read": true, 00:14:19.842 "write": true, 00:14:19.842 "unmap": true, 00:14:19.842 "flush": true, 00:14:19.842 "reset": true, 00:14:19.842 "nvme_admin": false, 00:14:19.842 "nvme_io": false, 00:14:19.842 "nvme_io_md": false, 00:14:19.842 "write_zeroes": true, 00:14:19.842 "zcopy": true, 00:14:19.842 "get_zone_info": false, 00:14:19.842 "zone_management": false, 00:14:19.842 "zone_append": false, 00:14:19.842 "compare": false, 00:14:19.842 "compare_and_write": false, 00:14:19.842 "abort": true, 00:14:19.842 "seek_hole": false, 00:14:19.842 "seek_data": false, 00:14:19.842 "copy": true, 00:14:19.843 "nvme_iov_md": false 00:14:19.843 }, 00:14:19.843 "memory_domains": [ 00:14:19.843 { 00:14:19.843 "dma_device_id": "system", 00:14:19.843 "dma_device_type": 1 00:14:19.843 }, 00:14:19.843 { 00:14:19.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.843 "dma_device_type": 2 00:14:19.843 } 00:14:19.843 ], 00:14:19.843 "driver_specific": {} 00:14:19.843 } 00:14:19.843 ] 00:14:19.843 13:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.843 13:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:19.843 13:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:19.843 13:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.843 13:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.843 13:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:19.843 13:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.843 13:34:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:19.843 13:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.843 13:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.843 13:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.843 13:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.843 13:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.843 13:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.843 13:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.843 13:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.844 13:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.844 13:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.844 "name": "Existed_Raid", 00:14:19.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.844 "strip_size_kb": 64, 00:14:19.844 "state": "configuring", 00:14:19.844 "raid_level": "concat", 00:14:19.844 "superblock": false, 00:14:19.844 "num_base_bdevs": 3, 00:14:19.844 "num_base_bdevs_discovered": 1, 00:14:19.844 "num_base_bdevs_operational": 3, 00:14:19.844 "base_bdevs_list": [ 00:14:19.844 { 00:14:19.844 "name": "BaseBdev1", 00:14:19.844 "uuid": "8a75ad71-bcc0-4bd0-8c6a-9e3a0c594b10", 00:14:19.844 "is_configured": true, 00:14:19.844 "data_offset": 0, 00:14:19.844 "data_size": 65536 00:14:19.844 }, 00:14:19.844 { 00:14:19.844 "name": "BaseBdev2", 00:14:19.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.844 "is_configured": false, 00:14:19.844 
"data_offset": 0, 00:14:19.844 "data_size": 0 00:14:19.844 }, 00:14:19.844 { 00:14:19.844 "name": "BaseBdev3", 00:14:19.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.844 "is_configured": false, 00:14:19.844 "data_offset": 0, 00:14:19.844 "data_size": 0 00:14:19.844 } 00:14:19.844 ] 00:14:19.845 }' 00:14:19.845 13:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.845 13:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.110 13:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:20.110 13:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.110 13:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.110 [2024-11-20 13:34:19.537432] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:20.110 [2024-11-20 13:34:19.537661] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:20.110 13:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.110 13:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:20.110 13:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.110 13:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.110 [2024-11-20 13:34:19.549481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:20.110 [2024-11-20 13:34:19.551722] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:20.110 [2024-11-20 13:34:19.551774] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:14:20.110 [2024-11-20 13:34:19.551787] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:20.110 [2024-11-20 13:34:19.551800] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:20.110 13:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.110 13:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:20.110 13:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:20.110 13:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:20.110 13:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:20.110 13:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:20.110 13:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:20.110 13:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.110 13:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.110 13:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.110 13:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.110 13:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.110 13:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.110 13:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.110 13:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:20.110 13:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.110 13:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.110 13:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.370 13:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.370 "name": "Existed_Raid", 00:14:20.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.370 "strip_size_kb": 64, 00:14:20.370 "state": "configuring", 00:14:20.370 "raid_level": "concat", 00:14:20.370 "superblock": false, 00:14:20.370 "num_base_bdevs": 3, 00:14:20.370 "num_base_bdevs_discovered": 1, 00:14:20.370 "num_base_bdevs_operational": 3, 00:14:20.370 "base_bdevs_list": [ 00:14:20.370 { 00:14:20.370 "name": "BaseBdev1", 00:14:20.370 "uuid": "8a75ad71-bcc0-4bd0-8c6a-9e3a0c594b10", 00:14:20.370 "is_configured": true, 00:14:20.370 "data_offset": 0, 00:14:20.370 "data_size": 65536 00:14:20.370 }, 00:14:20.370 { 00:14:20.370 "name": "BaseBdev2", 00:14:20.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.370 "is_configured": false, 00:14:20.370 "data_offset": 0, 00:14:20.370 "data_size": 0 00:14:20.370 }, 00:14:20.370 { 00:14:20.370 "name": "BaseBdev3", 00:14:20.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.370 "is_configured": false, 00:14:20.370 "data_offset": 0, 00:14:20.370 "data_size": 0 00:14:20.370 } 00:14:20.370 ] 00:14:20.370 }' 00:14:20.370 13:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.370 13:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.630 13:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:20.630 13:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:20.630 13:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.630 [2024-11-20 13:34:20.042781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:20.630 BaseBdev2 00:14:20.630 13:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.630 13:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:20.630 13:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:20.630 13:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:20.630 13:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:20.630 13:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:20.630 13:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:20.630 13:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:20.630 13:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.630 13:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.630 13:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.630 13:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:20.630 13:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.630 13:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.630 [ 00:14:20.630 { 00:14:20.630 "name": "BaseBdev2", 00:14:20.630 "aliases": [ 00:14:20.630 "26e26123-880f-4c98-8129-98c8bbda5eee" 00:14:20.630 ], 00:14:20.630 
"product_name": "Malloc disk", 00:14:20.630 "block_size": 512, 00:14:20.630 "num_blocks": 65536, 00:14:20.630 "uuid": "26e26123-880f-4c98-8129-98c8bbda5eee", 00:14:20.630 "assigned_rate_limits": { 00:14:20.630 "rw_ios_per_sec": 0, 00:14:20.630 "rw_mbytes_per_sec": 0, 00:14:20.630 "r_mbytes_per_sec": 0, 00:14:20.630 "w_mbytes_per_sec": 0 00:14:20.630 }, 00:14:20.630 "claimed": true, 00:14:20.630 "claim_type": "exclusive_write", 00:14:20.630 "zoned": false, 00:14:20.630 "supported_io_types": { 00:14:20.630 "read": true, 00:14:20.630 "write": true, 00:14:20.630 "unmap": true, 00:14:20.630 "flush": true, 00:14:20.630 "reset": true, 00:14:20.630 "nvme_admin": false, 00:14:20.630 "nvme_io": false, 00:14:20.630 "nvme_io_md": false, 00:14:20.630 "write_zeroes": true, 00:14:20.630 "zcopy": true, 00:14:20.630 "get_zone_info": false, 00:14:20.630 "zone_management": false, 00:14:20.630 "zone_append": false, 00:14:20.630 "compare": false, 00:14:20.630 "compare_and_write": false, 00:14:20.630 "abort": true, 00:14:20.630 "seek_hole": false, 00:14:20.630 "seek_data": false, 00:14:20.630 "copy": true, 00:14:20.630 "nvme_iov_md": false 00:14:20.630 }, 00:14:20.630 "memory_domains": [ 00:14:20.630 { 00:14:20.630 "dma_device_id": "system", 00:14:20.630 "dma_device_type": 1 00:14:20.630 }, 00:14:20.630 { 00:14:20.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.630 "dma_device_type": 2 00:14:20.630 } 00:14:20.630 ], 00:14:20.630 "driver_specific": {} 00:14:20.630 } 00:14:20.630 ] 00:14:20.630 13:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.630 13:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:20.630 13:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:20.630 13:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:20.630 13:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:20.630 13:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:20.630 13:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:20.630 13:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:20.630 13:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.630 13:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.630 13:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.630 13:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.630 13:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.630 13:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.630 13:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.630 13:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.630 13:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.630 13:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.889 13:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.889 13:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.889 "name": "Existed_Raid", 00:14:20.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.889 "strip_size_kb": 64, 00:14:20.889 "state": "configuring", 00:14:20.889 "raid_level": "concat", 00:14:20.889 "superblock": false, 
00:14:20.889 "num_base_bdevs": 3, 00:14:20.889 "num_base_bdevs_discovered": 2, 00:14:20.889 "num_base_bdevs_operational": 3, 00:14:20.889 "base_bdevs_list": [ 00:14:20.889 { 00:14:20.889 "name": "BaseBdev1", 00:14:20.889 "uuid": "8a75ad71-bcc0-4bd0-8c6a-9e3a0c594b10", 00:14:20.889 "is_configured": true, 00:14:20.889 "data_offset": 0, 00:14:20.889 "data_size": 65536 00:14:20.889 }, 00:14:20.889 { 00:14:20.889 "name": "BaseBdev2", 00:14:20.889 "uuid": "26e26123-880f-4c98-8129-98c8bbda5eee", 00:14:20.889 "is_configured": true, 00:14:20.889 "data_offset": 0, 00:14:20.889 "data_size": 65536 00:14:20.889 }, 00:14:20.889 { 00:14:20.889 "name": "BaseBdev3", 00:14:20.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.889 "is_configured": false, 00:14:20.889 "data_offset": 0, 00:14:20.889 "data_size": 0 00:14:20.889 } 00:14:20.889 ] 00:14:20.889 }' 00:14:20.889 13:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.889 13:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.149 13:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:21.149 13:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.149 13:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.149 [2024-11-20 13:34:20.584847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:21.149 [2024-11-20 13:34:20.584898] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:21.149 [2024-11-20 13:34:20.584913] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:21.149 [2024-11-20 13:34:20.585228] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:21.149 [2024-11-20 13:34:20.585398] bdev_raid.c:1764:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x617000007e80 00:14:21.149 [2024-11-20 13:34:20.585409] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:21.149 [2024-11-20 13:34:20.585676] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:21.149 BaseBdev3 00:14:21.149 13:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.149 13:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:21.149 13:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:21.149 13:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:21.149 13:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:21.149 13:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:21.149 13:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:21.149 13:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:21.149 13:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.149 13:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.149 13:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.149 13:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:21.149 13:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.149 13:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.149 [ 00:14:21.149 { 00:14:21.149 "name": "BaseBdev3", 00:14:21.149 "aliases": [ 
00:14:21.149 "5ffffc17-ab80-4096-968f-4838080f68c1" 00:14:21.149 ], 00:14:21.149 "product_name": "Malloc disk", 00:14:21.149 "block_size": 512, 00:14:21.149 "num_blocks": 65536, 00:14:21.149 "uuid": "5ffffc17-ab80-4096-968f-4838080f68c1", 00:14:21.149 "assigned_rate_limits": { 00:14:21.149 "rw_ios_per_sec": 0, 00:14:21.149 "rw_mbytes_per_sec": 0, 00:14:21.149 "r_mbytes_per_sec": 0, 00:14:21.149 "w_mbytes_per_sec": 0 00:14:21.149 }, 00:14:21.149 "claimed": true, 00:14:21.149 "claim_type": "exclusive_write", 00:14:21.149 "zoned": false, 00:14:21.149 "supported_io_types": { 00:14:21.149 "read": true, 00:14:21.149 "write": true, 00:14:21.149 "unmap": true, 00:14:21.149 "flush": true, 00:14:21.149 "reset": true, 00:14:21.149 "nvme_admin": false, 00:14:21.149 "nvme_io": false, 00:14:21.149 "nvme_io_md": false, 00:14:21.149 "write_zeroes": true, 00:14:21.149 "zcopy": true, 00:14:21.149 "get_zone_info": false, 00:14:21.149 "zone_management": false, 00:14:21.149 "zone_append": false, 00:14:21.149 "compare": false, 00:14:21.149 "compare_and_write": false, 00:14:21.149 "abort": true, 00:14:21.149 "seek_hole": false, 00:14:21.149 "seek_data": false, 00:14:21.149 "copy": true, 00:14:21.149 "nvme_iov_md": false 00:14:21.149 }, 00:14:21.149 "memory_domains": [ 00:14:21.149 { 00:14:21.149 "dma_device_id": "system", 00:14:21.149 "dma_device_type": 1 00:14:21.149 }, 00:14:21.149 { 00:14:21.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.149 "dma_device_type": 2 00:14:21.149 } 00:14:21.149 ], 00:14:21.149 "driver_specific": {} 00:14:21.149 } 00:14:21.149 ] 00:14:21.149 13:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.149 13:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:21.149 13:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:21.149 13:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:14:21.149 13:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:14:21.149 13:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.149 13:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:21.149 13:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:21.149 13:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.149 13:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:21.149 13:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.149 13:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.149 13:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.408 13:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.409 13:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.409 13:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.409 13:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.409 13:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.409 13:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.409 13:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.409 "name": "Existed_Raid", 00:14:21.409 "uuid": "fb4ca374-2ea5-4da3-83c5-5d1f4c68b888", 00:14:21.409 "strip_size_kb": 64, 00:14:21.409 "state": "online", 
00:14:21.409 "raid_level": "concat", 00:14:21.409 "superblock": false, 00:14:21.409 "num_base_bdevs": 3, 00:14:21.409 "num_base_bdevs_discovered": 3, 00:14:21.409 "num_base_bdevs_operational": 3, 00:14:21.409 "base_bdevs_list": [ 00:14:21.409 { 00:14:21.409 "name": "BaseBdev1", 00:14:21.409 "uuid": "8a75ad71-bcc0-4bd0-8c6a-9e3a0c594b10", 00:14:21.409 "is_configured": true, 00:14:21.409 "data_offset": 0, 00:14:21.409 "data_size": 65536 00:14:21.409 }, 00:14:21.409 { 00:14:21.409 "name": "BaseBdev2", 00:14:21.409 "uuid": "26e26123-880f-4c98-8129-98c8bbda5eee", 00:14:21.409 "is_configured": true, 00:14:21.409 "data_offset": 0, 00:14:21.409 "data_size": 65536 00:14:21.409 }, 00:14:21.409 { 00:14:21.409 "name": "BaseBdev3", 00:14:21.409 "uuid": "5ffffc17-ab80-4096-968f-4838080f68c1", 00:14:21.409 "is_configured": true, 00:14:21.409 "data_offset": 0, 00:14:21.409 "data_size": 65536 00:14:21.409 } 00:14:21.409 ] 00:14:21.409 }' 00:14:21.409 13:34:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.409 13:34:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.669 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:21.669 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:21.669 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:21.669 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:21.669 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:21.669 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:21.669 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:21.669 13:34:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:21.669 13:34:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.669 13:34:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.669 [2024-11-20 13:34:21.052592] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:21.669 13:34:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.669 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:21.669 "name": "Existed_Raid", 00:14:21.669 "aliases": [ 00:14:21.669 "fb4ca374-2ea5-4da3-83c5-5d1f4c68b888" 00:14:21.669 ], 00:14:21.669 "product_name": "Raid Volume", 00:14:21.669 "block_size": 512, 00:14:21.669 "num_blocks": 196608, 00:14:21.669 "uuid": "fb4ca374-2ea5-4da3-83c5-5d1f4c68b888", 00:14:21.669 "assigned_rate_limits": { 00:14:21.669 "rw_ios_per_sec": 0, 00:14:21.669 "rw_mbytes_per_sec": 0, 00:14:21.669 "r_mbytes_per_sec": 0, 00:14:21.669 "w_mbytes_per_sec": 0 00:14:21.669 }, 00:14:21.669 "claimed": false, 00:14:21.669 "zoned": false, 00:14:21.669 "supported_io_types": { 00:14:21.669 "read": true, 00:14:21.669 "write": true, 00:14:21.669 "unmap": true, 00:14:21.669 "flush": true, 00:14:21.669 "reset": true, 00:14:21.669 "nvme_admin": false, 00:14:21.669 "nvme_io": false, 00:14:21.669 "nvme_io_md": false, 00:14:21.669 "write_zeroes": true, 00:14:21.669 "zcopy": false, 00:14:21.669 "get_zone_info": false, 00:14:21.669 "zone_management": false, 00:14:21.669 "zone_append": false, 00:14:21.669 "compare": false, 00:14:21.669 "compare_and_write": false, 00:14:21.669 "abort": false, 00:14:21.669 "seek_hole": false, 00:14:21.669 "seek_data": false, 00:14:21.669 "copy": false, 00:14:21.669 "nvme_iov_md": false 00:14:21.669 }, 00:14:21.669 "memory_domains": [ 00:14:21.669 { 00:14:21.669 "dma_device_id": "system", 00:14:21.669 "dma_device_type": 1 
00:14:21.669 }, 00:14:21.669 { 00:14:21.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.669 "dma_device_type": 2 00:14:21.669 }, 00:14:21.669 { 00:14:21.669 "dma_device_id": "system", 00:14:21.669 "dma_device_type": 1 00:14:21.669 }, 00:14:21.669 { 00:14:21.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.669 "dma_device_type": 2 00:14:21.669 }, 00:14:21.669 { 00:14:21.669 "dma_device_id": "system", 00:14:21.669 "dma_device_type": 1 00:14:21.669 }, 00:14:21.669 { 00:14:21.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.669 "dma_device_type": 2 00:14:21.669 } 00:14:21.669 ], 00:14:21.669 "driver_specific": { 00:14:21.669 "raid": { 00:14:21.669 "uuid": "fb4ca374-2ea5-4da3-83c5-5d1f4c68b888", 00:14:21.669 "strip_size_kb": 64, 00:14:21.669 "state": "online", 00:14:21.669 "raid_level": "concat", 00:14:21.669 "superblock": false, 00:14:21.669 "num_base_bdevs": 3, 00:14:21.669 "num_base_bdevs_discovered": 3, 00:14:21.669 "num_base_bdevs_operational": 3, 00:14:21.669 "base_bdevs_list": [ 00:14:21.669 { 00:14:21.669 "name": "BaseBdev1", 00:14:21.669 "uuid": "8a75ad71-bcc0-4bd0-8c6a-9e3a0c594b10", 00:14:21.669 "is_configured": true, 00:14:21.669 "data_offset": 0, 00:14:21.669 "data_size": 65536 00:14:21.669 }, 00:14:21.669 { 00:14:21.669 "name": "BaseBdev2", 00:14:21.669 "uuid": "26e26123-880f-4c98-8129-98c8bbda5eee", 00:14:21.669 "is_configured": true, 00:14:21.669 "data_offset": 0, 00:14:21.669 "data_size": 65536 00:14:21.669 }, 00:14:21.669 { 00:14:21.669 "name": "BaseBdev3", 00:14:21.669 "uuid": "5ffffc17-ab80-4096-968f-4838080f68c1", 00:14:21.669 "is_configured": true, 00:14:21.669 "data_offset": 0, 00:14:21.669 "data_size": 65536 00:14:21.669 } 00:14:21.669 ] 00:14:21.669 } 00:14:21.669 } 00:14:21.669 }' 00:14:21.670 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:21.670 13:34:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:21.670 BaseBdev2 00:14:21.670 BaseBdev3' 00:14:21.670 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.929 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:21.929 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:21.929 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:21.929 13:34:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.929 13:34:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.929 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.929 13:34:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.929 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:21.929 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:21.929 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:21.929 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:21.929 13:34:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.929 13:34:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.929 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.929 13:34:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.929 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:21.929 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:21.929 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:21.929 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.929 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:21.929 13:34:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.929 13:34:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.929 13:34:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.929 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:21.929 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:21.929 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:21.929 13:34:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.929 13:34:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.929 [2024-11-20 13:34:21.331963] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:21.929 [2024-11-20 13:34:21.332135] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:21.929 [2024-11-20 13:34:21.332222] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:22.188 13:34:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:22.188 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:22.188 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:14:22.188 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:22.188 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:22.188 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:22.188 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:14:22.188 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.188 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:22.188 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:22.188 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.188 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:22.188 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.188 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.188 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.188 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.188 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.188 13:34:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.188 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:14:22.188 13:34:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.188 13:34:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.188 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.188 "name": "Existed_Raid", 00:14:22.188 "uuid": "fb4ca374-2ea5-4da3-83c5-5d1f4c68b888", 00:14:22.188 "strip_size_kb": 64, 00:14:22.188 "state": "offline", 00:14:22.188 "raid_level": "concat", 00:14:22.188 "superblock": false, 00:14:22.188 "num_base_bdevs": 3, 00:14:22.188 "num_base_bdevs_discovered": 2, 00:14:22.188 "num_base_bdevs_operational": 2, 00:14:22.188 "base_bdevs_list": [ 00:14:22.188 { 00:14:22.188 "name": null, 00:14:22.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.188 "is_configured": false, 00:14:22.188 "data_offset": 0, 00:14:22.188 "data_size": 65536 00:14:22.188 }, 00:14:22.188 { 00:14:22.188 "name": "BaseBdev2", 00:14:22.188 "uuid": "26e26123-880f-4c98-8129-98c8bbda5eee", 00:14:22.188 "is_configured": true, 00:14:22.188 "data_offset": 0, 00:14:22.188 "data_size": 65536 00:14:22.188 }, 00:14:22.188 { 00:14:22.188 "name": "BaseBdev3", 00:14:22.188 "uuid": "5ffffc17-ab80-4096-968f-4838080f68c1", 00:14:22.188 "is_configured": true, 00:14:22.188 "data_offset": 0, 00:14:22.188 "data_size": 65536 00:14:22.188 } 00:14:22.188 ] 00:14:22.188 }' 00:14:22.188 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.188 13:34:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.447 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:22.447 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:22.447 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.447 
13:34:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.447 13:34:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.447 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:22.447 13:34:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.447 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:22.447 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:22.447 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:22.447 13:34:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.447 13:34:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.447 [2024-11-20 13:34:21.861605] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:22.706 13:34:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.706 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:22.706 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:22.706 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.706 13:34:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.706 13:34:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.706 13:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:22.706 13:34:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.706 13:34:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:22.706 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:22.706 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:22.706 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.706 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.706 [2024-11-20 13:34:22.019776] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:22.706 [2024-11-20 13:34:22.019834] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:22.706 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.706 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:22.706 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:22.706 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.706 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.706 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.706 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:22.706 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.706 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:22.706 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:22.706 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:22.706 
13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:22.706 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:22.706 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:22.706 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.706 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.966 BaseBdev2 00:14:22.966 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.966 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:22.966 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:22.966 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:22.966 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:22.966 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:22.966 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:22.966 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:22.966 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.966 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.966 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.966 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:22.966 13:34:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.966 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.966 [ 00:14:22.966 { 00:14:22.966 "name": "BaseBdev2", 00:14:22.966 "aliases": [ 00:14:22.966 "38e909b5-6d0d-4c4d-afc0-b5981c85d3df" 00:14:22.966 ], 00:14:22.966 "product_name": "Malloc disk", 00:14:22.966 "block_size": 512, 00:14:22.966 "num_blocks": 65536, 00:14:22.966 "uuid": "38e909b5-6d0d-4c4d-afc0-b5981c85d3df", 00:14:22.966 "assigned_rate_limits": { 00:14:22.966 "rw_ios_per_sec": 0, 00:14:22.966 "rw_mbytes_per_sec": 0, 00:14:22.966 "r_mbytes_per_sec": 0, 00:14:22.966 "w_mbytes_per_sec": 0 00:14:22.966 }, 00:14:22.966 "claimed": false, 00:14:22.966 "zoned": false, 00:14:22.966 "supported_io_types": { 00:14:22.966 "read": true, 00:14:22.966 "write": true, 00:14:22.966 "unmap": true, 00:14:22.966 "flush": true, 00:14:22.966 "reset": true, 00:14:22.966 "nvme_admin": false, 00:14:22.966 "nvme_io": false, 00:14:22.966 "nvme_io_md": false, 00:14:22.966 "write_zeroes": true, 00:14:22.966 "zcopy": true, 00:14:22.966 "get_zone_info": false, 00:14:22.966 "zone_management": false, 00:14:22.966 "zone_append": false, 00:14:22.966 "compare": false, 00:14:22.966 "compare_and_write": false, 00:14:22.966 "abort": true, 00:14:22.966 "seek_hole": false, 00:14:22.966 "seek_data": false, 00:14:22.966 "copy": true, 00:14:22.966 "nvme_iov_md": false 00:14:22.966 }, 00:14:22.966 "memory_domains": [ 00:14:22.966 { 00:14:22.966 "dma_device_id": "system", 00:14:22.966 "dma_device_type": 1 00:14:22.966 }, 00:14:22.966 { 00:14:22.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.966 "dma_device_type": 2 00:14:22.966 } 00:14:22.966 ], 00:14:22.966 "driver_specific": {} 00:14:22.966 } 00:14:22.966 ] 00:14:22.966 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.966 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:22.966 
13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:22.966 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:22.966 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:22.966 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.966 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.966 BaseBdev3 00:14:22.966 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.966 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:22.966 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:22.966 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:22.966 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:22.966 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:22.966 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:22.966 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:22.966 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.966 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.966 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.966 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:22.966 13:34:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.966 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.966 [ 00:14:22.966 { 00:14:22.966 "name": "BaseBdev3", 00:14:22.966 "aliases": [ 00:14:22.966 "2ca7f81c-76cb-42d9-90a2-f02a20ca283d" 00:14:22.966 ], 00:14:22.966 "product_name": "Malloc disk", 00:14:22.966 "block_size": 512, 00:14:22.966 "num_blocks": 65536, 00:14:22.966 "uuid": "2ca7f81c-76cb-42d9-90a2-f02a20ca283d", 00:14:22.966 "assigned_rate_limits": { 00:14:22.966 "rw_ios_per_sec": 0, 00:14:22.966 "rw_mbytes_per_sec": 0, 00:14:22.966 "r_mbytes_per_sec": 0, 00:14:22.966 "w_mbytes_per_sec": 0 00:14:22.966 }, 00:14:22.966 "claimed": false, 00:14:22.966 "zoned": false, 00:14:22.966 "supported_io_types": { 00:14:22.966 "read": true, 00:14:22.966 "write": true, 00:14:22.966 "unmap": true, 00:14:22.966 "flush": true, 00:14:22.966 "reset": true, 00:14:22.966 "nvme_admin": false, 00:14:22.966 "nvme_io": false, 00:14:22.966 "nvme_io_md": false, 00:14:22.966 "write_zeroes": true, 00:14:22.966 "zcopy": true, 00:14:22.966 "get_zone_info": false, 00:14:22.966 "zone_management": false, 00:14:22.966 "zone_append": false, 00:14:22.966 "compare": false, 00:14:22.966 "compare_and_write": false, 00:14:22.966 "abort": true, 00:14:22.966 "seek_hole": false, 00:14:22.966 "seek_data": false, 00:14:22.966 "copy": true, 00:14:22.967 "nvme_iov_md": false 00:14:22.967 }, 00:14:22.967 "memory_domains": [ 00:14:22.967 { 00:14:22.967 "dma_device_id": "system", 00:14:22.967 "dma_device_type": 1 00:14:22.967 }, 00:14:22.967 { 00:14:22.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.967 "dma_device_type": 2 00:14:22.967 } 00:14:22.967 ], 00:14:22.967 "driver_specific": {} 00:14:22.967 } 00:14:22.967 ] 00:14:22.967 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.967 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:22.967 
13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:22.967 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:22.967 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:22.967 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.967 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.967 [2024-11-20 13:34:22.354312] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:22.967 [2024-11-20 13:34:22.354363] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:22.967 [2024-11-20 13:34:22.354393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:22.967 [2024-11-20 13:34:22.356568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:22.967 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.967 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:22.967 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.967 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:22.967 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:22.967 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.967 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:22.967 13:34:22 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.967 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.967 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.967 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.967 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.967 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.967 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.967 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.967 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.967 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.967 "name": "Existed_Raid", 00:14:22.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.967 "strip_size_kb": 64, 00:14:22.967 "state": "configuring", 00:14:22.967 "raid_level": "concat", 00:14:22.967 "superblock": false, 00:14:22.967 "num_base_bdevs": 3, 00:14:22.967 "num_base_bdevs_discovered": 2, 00:14:22.967 "num_base_bdevs_operational": 3, 00:14:22.967 "base_bdevs_list": [ 00:14:22.967 { 00:14:22.967 "name": "BaseBdev1", 00:14:22.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.967 "is_configured": false, 00:14:22.967 "data_offset": 0, 00:14:22.967 "data_size": 0 00:14:22.967 }, 00:14:22.967 { 00:14:22.967 "name": "BaseBdev2", 00:14:22.967 "uuid": "38e909b5-6d0d-4c4d-afc0-b5981c85d3df", 00:14:22.967 "is_configured": true, 00:14:22.967 "data_offset": 0, 00:14:22.967 "data_size": 65536 00:14:22.967 }, 00:14:22.967 { 00:14:22.967 "name": "BaseBdev3", 00:14:22.967 "uuid": 
"2ca7f81c-76cb-42d9-90a2-f02a20ca283d", 00:14:22.967 "is_configured": true, 00:14:22.967 "data_offset": 0, 00:14:22.967 "data_size": 65536 00:14:22.967 } 00:14:22.967 ] 00:14:22.967 }' 00:14:22.967 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.967 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.535 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:23.535 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.535 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.535 [2024-11-20 13:34:22.781782] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:23.535 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.535 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:23.535 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:23.535 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:23.535 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:23.535 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.535 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:23.535 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.535 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.535 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:23.535 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.535 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.535 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.535 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.535 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.535 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.535 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.535 "name": "Existed_Raid", 00:14:23.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.535 "strip_size_kb": 64, 00:14:23.535 "state": "configuring", 00:14:23.535 "raid_level": "concat", 00:14:23.535 "superblock": false, 00:14:23.535 "num_base_bdevs": 3, 00:14:23.535 "num_base_bdevs_discovered": 1, 00:14:23.535 "num_base_bdevs_operational": 3, 00:14:23.535 "base_bdevs_list": [ 00:14:23.535 { 00:14:23.535 "name": "BaseBdev1", 00:14:23.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.535 "is_configured": false, 00:14:23.535 "data_offset": 0, 00:14:23.535 "data_size": 0 00:14:23.535 }, 00:14:23.535 { 00:14:23.535 "name": null, 00:14:23.535 "uuid": "38e909b5-6d0d-4c4d-afc0-b5981c85d3df", 00:14:23.535 "is_configured": false, 00:14:23.535 "data_offset": 0, 00:14:23.535 "data_size": 65536 00:14:23.535 }, 00:14:23.535 { 00:14:23.535 "name": "BaseBdev3", 00:14:23.535 "uuid": "2ca7f81c-76cb-42d9-90a2-f02a20ca283d", 00:14:23.535 "is_configured": true, 00:14:23.535 "data_offset": 0, 00:14:23.535 "data_size": 65536 00:14:23.535 } 00:14:23.535 ] 00:14:23.535 }' 00:14:23.535 13:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:14:23.535 13:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.795 13:34:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:23.795 13:34:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.795 13:34:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.795 13:34:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.795 13:34:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.795 13:34:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:23.795 13:34:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:23.795 13:34:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.795 13:34:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.795 [2024-11-20 13:34:23.276866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:23.795 BaseBdev1 00:14:23.795 13:34:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.795 13:34:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:23.795 13:34:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:23.795 13:34:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:23.795 13:34:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:23.795 13:34:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:23.795 13:34:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:23.795 13:34:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:23.795 13:34:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.795 13:34:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.055 13:34:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.055 13:34:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:24.055 13:34:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.055 13:34:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.055 [ 00:14:24.055 { 00:14:24.055 "name": "BaseBdev1", 00:14:24.055 "aliases": [ 00:14:24.055 "6e19e6e5-8d1d-48ad-96b4-855a140857b6" 00:14:24.055 ], 00:14:24.055 "product_name": "Malloc disk", 00:14:24.055 "block_size": 512, 00:14:24.055 "num_blocks": 65536, 00:14:24.055 "uuid": "6e19e6e5-8d1d-48ad-96b4-855a140857b6", 00:14:24.055 "assigned_rate_limits": { 00:14:24.055 "rw_ios_per_sec": 0, 00:14:24.055 "rw_mbytes_per_sec": 0, 00:14:24.055 "r_mbytes_per_sec": 0, 00:14:24.055 "w_mbytes_per_sec": 0 00:14:24.055 }, 00:14:24.055 "claimed": true, 00:14:24.055 "claim_type": "exclusive_write", 00:14:24.055 "zoned": false, 00:14:24.055 "supported_io_types": { 00:14:24.055 "read": true, 00:14:24.055 "write": true, 00:14:24.055 "unmap": true, 00:14:24.055 "flush": true, 00:14:24.055 "reset": true, 00:14:24.055 "nvme_admin": false, 00:14:24.055 "nvme_io": false, 00:14:24.055 "nvme_io_md": false, 00:14:24.055 "write_zeroes": true, 00:14:24.055 "zcopy": true, 00:14:24.055 "get_zone_info": false, 00:14:24.055 "zone_management": false, 00:14:24.055 "zone_append": false, 00:14:24.055 "compare": false, 00:14:24.055 "compare_and_write": false, 
00:14:24.055 "abort": true, 00:14:24.055 "seek_hole": false, 00:14:24.055 "seek_data": false, 00:14:24.055 "copy": true, 00:14:24.055 "nvme_iov_md": false 00:14:24.055 }, 00:14:24.055 "memory_domains": [ 00:14:24.055 { 00:14:24.055 "dma_device_id": "system", 00:14:24.055 "dma_device_type": 1 00:14:24.055 }, 00:14:24.055 { 00:14:24.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.055 "dma_device_type": 2 00:14:24.055 } 00:14:24.055 ], 00:14:24.055 "driver_specific": {} 00:14:24.055 } 00:14:24.055 ] 00:14:24.055 13:34:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.055 13:34:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:24.055 13:34:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:24.055 13:34:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.055 13:34:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:24.055 13:34:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:24.055 13:34:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.055 13:34:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:24.055 13:34:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.055 13:34:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.055 13:34:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.055 13:34:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.055 13:34:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:24.055 13:34:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.055 13:34:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.055 13:34:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.055 13:34:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.055 13:34:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.055 "name": "Existed_Raid", 00:14:24.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.055 "strip_size_kb": 64, 00:14:24.055 "state": "configuring", 00:14:24.055 "raid_level": "concat", 00:14:24.055 "superblock": false, 00:14:24.055 "num_base_bdevs": 3, 00:14:24.055 "num_base_bdevs_discovered": 2, 00:14:24.055 "num_base_bdevs_operational": 3, 00:14:24.055 "base_bdevs_list": [ 00:14:24.055 { 00:14:24.055 "name": "BaseBdev1", 00:14:24.055 "uuid": "6e19e6e5-8d1d-48ad-96b4-855a140857b6", 00:14:24.055 "is_configured": true, 00:14:24.055 "data_offset": 0, 00:14:24.055 "data_size": 65536 00:14:24.055 }, 00:14:24.055 { 00:14:24.055 "name": null, 00:14:24.055 "uuid": "38e909b5-6d0d-4c4d-afc0-b5981c85d3df", 00:14:24.055 "is_configured": false, 00:14:24.055 "data_offset": 0, 00:14:24.055 "data_size": 65536 00:14:24.055 }, 00:14:24.055 { 00:14:24.055 "name": "BaseBdev3", 00:14:24.055 "uuid": "2ca7f81c-76cb-42d9-90a2-f02a20ca283d", 00:14:24.055 "is_configured": true, 00:14:24.055 "data_offset": 0, 00:14:24.055 "data_size": 65536 00:14:24.055 } 00:14:24.055 ] 00:14:24.055 }' 00:14:24.055 13:34:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.055 13:34:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.318 13:34:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.318 13:34:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:24.318 13:34:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.318 13:34:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.318 13:34:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.318 13:34:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:24.318 13:34:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:24.318 13:34:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.318 13:34:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.318 [2024-11-20 13:34:23.784252] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:24.318 13:34:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.318 13:34:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:24.318 13:34:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.318 13:34:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:24.318 13:34:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:24.318 13:34:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.318 13:34:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:24.318 13:34:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.318 13:34:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.318 13:34:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.318 13:34:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.318 13:34:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.318 13:34:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.318 13:34:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.318 13:34:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.595 13:34:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.595 13:34:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.595 "name": "Existed_Raid", 00:14:24.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.595 "strip_size_kb": 64, 00:14:24.595 "state": "configuring", 00:14:24.595 "raid_level": "concat", 00:14:24.595 "superblock": false, 00:14:24.595 "num_base_bdevs": 3, 00:14:24.595 "num_base_bdevs_discovered": 1, 00:14:24.595 "num_base_bdevs_operational": 3, 00:14:24.595 "base_bdevs_list": [ 00:14:24.595 { 00:14:24.595 "name": "BaseBdev1", 00:14:24.595 "uuid": "6e19e6e5-8d1d-48ad-96b4-855a140857b6", 00:14:24.595 "is_configured": true, 00:14:24.595 "data_offset": 0, 00:14:24.595 "data_size": 65536 00:14:24.595 }, 00:14:24.595 { 00:14:24.595 "name": null, 00:14:24.595 "uuid": "38e909b5-6d0d-4c4d-afc0-b5981c85d3df", 00:14:24.595 "is_configured": false, 00:14:24.595 "data_offset": 0, 00:14:24.595 "data_size": 65536 00:14:24.595 }, 00:14:24.595 { 00:14:24.595 "name": null, 00:14:24.595 "uuid": "2ca7f81c-76cb-42d9-90a2-f02a20ca283d", 00:14:24.595 "is_configured": false, 00:14:24.595 "data_offset": 0, 00:14:24.595 "data_size": 65536 00:14:24.595 
} 00:14:24.595 ] 00:14:24.595 }' 00:14:24.595 13:34:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.595 13:34:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.854 13:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.854 13:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:24.854 13:34:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.854 13:34:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.854 13:34:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.854 13:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:24.854 13:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:24.854 13:34:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.854 13:34:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.854 [2024-11-20 13:34:24.235663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:24.854 13:34:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.854 13:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:24.854 13:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.854 13:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:24.854 13:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:14:24.854 13:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.854 13:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:24.854 13:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.854 13:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.854 13:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.854 13:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.854 13:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.854 13:34:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.854 13:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.854 13:34:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.854 13:34:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.854 13:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.854 "name": "Existed_Raid", 00:14:24.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.854 "strip_size_kb": 64, 00:14:24.854 "state": "configuring", 00:14:24.854 "raid_level": "concat", 00:14:24.855 "superblock": false, 00:14:24.855 "num_base_bdevs": 3, 00:14:24.855 "num_base_bdevs_discovered": 2, 00:14:24.855 "num_base_bdevs_operational": 3, 00:14:24.855 "base_bdevs_list": [ 00:14:24.855 { 00:14:24.855 "name": "BaseBdev1", 00:14:24.855 "uuid": "6e19e6e5-8d1d-48ad-96b4-855a140857b6", 00:14:24.855 "is_configured": true, 00:14:24.855 "data_offset": 0, 00:14:24.855 "data_size": 65536 00:14:24.855 }, 00:14:24.855 { 
00:14:24.855 "name": null, 00:14:24.855 "uuid": "38e909b5-6d0d-4c4d-afc0-b5981c85d3df", 00:14:24.855 "is_configured": false, 00:14:24.855 "data_offset": 0, 00:14:24.855 "data_size": 65536 00:14:24.855 }, 00:14:24.855 { 00:14:24.855 "name": "BaseBdev3", 00:14:24.855 "uuid": "2ca7f81c-76cb-42d9-90a2-f02a20ca283d", 00:14:24.855 "is_configured": true, 00:14:24.855 "data_offset": 0, 00:14:24.855 "data_size": 65536 00:14:24.855 } 00:14:24.855 ] 00:14:24.855 }' 00:14:24.855 13:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.855 13:34:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.421 13:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.421 13:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:25.421 13:34:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.421 13:34:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.421 13:34:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.421 13:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:25.421 13:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:25.421 13:34:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.421 13:34:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.421 [2024-11-20 13:34:24.727000] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:25.421 13:34:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.421 13:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state 
Existed_Raid configuring concat 64 3 00:14:25.421 13:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:25.421 13:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:25.421 13:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:25.421 13:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.421 13:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:25.421 13:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.421 13:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.421 13:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.421 13:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.421 13:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.421 13:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.421 13:34:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.421 13:34:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.421 13:34:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.421 13:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.421 "name": "Existed_Raid", 00:14:25.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.421 "strip_size_kb": 64, 00:14:25.421 "state": "configuring", 00:14:25.421 "raid_level": "concat", 00:14:25.421 "superblock": false, 00:14:25.421 "num_base_bdevs": 3, 
00:14:25.421 "num_base_bdevs_discovered": 1, 00:14:25.421 "num_base_bdevs_operational": 3, 00:14:25.421 "base_bdevs_list": [ 00:14:25.421 { 00:14:25.421 "name": null, 00:14:25.421 "uuid": "6e19e6e5-8d1d-48ad-96b4-855a140857b6", 00:14:25.422 "is_configured": false, 00:14:25.422 "data_offset": 0, 00:14:25.422 "data_size": 65536 00:14:25.422 }, 00:14:25.422 { 00:14:25.422 "name": null, 00:14:25.422 "uuid": "38e909b5-6d0d-4c4d-afc0-b5981c85d3df", 00:14:25.422 "is_configured": false, 00:14:25.422 "data_offset": 0, 00:14:25.422 "data_size": 65536 00:14:25.422 }, 00:14:25.422 { 00:14:25.422 "name": "BaseBdev3", 00:14:25.422 "uuid": "2ca7f81c-76cb-42d9-90a2-f02a20ca283d", 00:14:25.422 "is_configured": true, 00:14:25.422 "data_offset": 0, 00:14:25.422 "data_size": 65536 00:14:25.422 } 00:14:25.422 ] 00:14:25.422 }' 00:14:25.422 13:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.422 13:34:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.990 13:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.990 13:34:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.990 13:34:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.990 13:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:25.990 13:34:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.991 13:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:25.991 13:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:25.991 13:34:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.991 13:34:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.991 [2024-11-20 13:34:25.276940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:25.991 13:34:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.991 13:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:25.991 13:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:25.991 13:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:25.991 13:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:25.991 13:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.991 13:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:25.991 13:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.991 13:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.991 13:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.991 13:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.991 13:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.991 13:34:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.991 13:34:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.991 13:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.991 13:34:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.991 13:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.991 "name": "Existed_Raid", 00:14:25.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.991 "strip_size_kb": 64, 00:14:25.991 "state": "configuring", 00:14:25.991 "raid_level": "concat", 00:14:25.991 "superblock": false, 00:14:25.991 "num_base_bdevs": 3, 00:14:25.991 "num_base_bdevs_discovered": 2, 00:14:25.991 "num_base_bdevs_operational": 3, 00:14:25.991 "base_bdevs_list": [ 00:14:25.991 { 00:14:25.991 "name": null, 00:14:25.991 "uuid": "6e19e6e5-8d1d-48ad-96b4-855a140857b6", 00:14:25.991 "is_configured": false, 00:14:25.991 "data_offset": 0, 00:14:25.991 "data_size": 65536 00:14:25.991 }, 00:14:25.991 { 00:14:25.991 "name": "BaseBdev2", 00:14:25.991 "uuid": "38e909b5-6d0d-4c4d-afc0-b5981c85d3df", 00:14:25.991 "is_configured": true, 00:14:25.991 "data_offset": 0, 00:14:25.991 "data_size": 65536 00:14:25.991 }, 00:14:25.991 { 00:14:25.991 "name": "BaseBdev3", 00:14:25.991 "uuid": "2ca7f81c-76cb-42d9-90a2-f02a20ca283d", 00:14:25.991 "is_configured": true, 00:14:25.991 "data_offset": 0, 00:14:25.991 "data_size": 65536 00:14:25.991 } 00:14:25.991 ] 00:14:25.991 }' 00:14:25.991 13:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.991 13:34:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.250 13:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.250 13:34:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.250 13:34:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.250 13:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:26.510 13:34:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.510 13:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:26.510 13:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.510 13:34:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.510 13:34:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.510 13:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:26.510 13:34:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.510 13:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6e19e6e5-8d1d-48ad-96b4-855a140857b6 00:14:26.510 13:34:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.510 13:34:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.510 [2024-11-20 13:34:25.863677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:26.510 [2024-11-20 13:34:25.863722] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:26.510 [2024-11-20 13:34:25.863734] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:26.510 [2024-11-20 13:34:25.864003] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:26.510 [2024-11-20 13:34:25.864185] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:26.510 [2024-11-20 13:34:25.864197] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:26.510 [2024-11-20 13:34:25.864485] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:14:26.510 NewBaseBdev 00:14:26.510 13:34:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.510 13:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:26.510 13:34:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:26.510 13:34:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:26.510 13:34:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:26.510 13:34:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:26.510 13:34:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:26.510 13:34:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:26.510 13:34:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.510 13:34:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.510 13:34:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.510 13:34:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:26.510 13:34:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.510 13:34:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.510 [ 00:14:26.510 { 00:14:26.510 "name": "NewBaseBdev", 00:14:26.510 "aliases": [ 00:14:26.510 "6e19e6e5-8d1d-48ad-96b4-855a140857b6" 00:14:26.510 ], 00:14:26.510 "product_name": "Malloc disk", 00:14:26.510 "block_size": 512, 00:14:26.510 "num_blocks": 65536, 00:14:26.510 "uuid": "6e19e6e5-8d1d-48ad-96b4-855a140857b6", 00:14:26.510 "assigned_rate_limits": { 
00:14:26.510 "rw_ios_per_sec": 0, 00:14:26.510 "rw_mbytes_per_sec": 0, 00:14:26.510 "r_mbytes_per_sec": 0, 00:14:26.510 "w_mbytes_per_sec": 0 00:14:26.510 }, 00:14:26.510 "claimed": true, 00:14:26.510 "claim_type": "exclusive_write", 00:14:26.510 "zoned": false, 00:14:26.510 "supported_io_types": { 00:14:26.510 "read": true, 00:14:26.510 "write": true, 00:14:26.510 "unmap": true, 00:14:26.510 "flush": true, 00:14:26.510 "reset": true, 00:14:26.510 "nvme_admin": false, 00:14:26.510 "nvme_io": false, 00:14:26.510 "nvme_io_md": false, 00:14:26.510 "write_zeroes": true, 00:14:26.510 "zcopy": true, 00:14:26.510 "get_zone_info": false, 00:14:26.510 "zone_management": false, 00:14:26.510 "zone_append": false, 00:14:26.510 "compare": false, 00:14:26.510 "compare_and_write": false, 00:14:26.510 "abort": true, 00:14:26.510 "seek_hole": false, 00:14:26.510 "seek_data": false, 00:14:26.510 "copy": true, 00:14:26.510 "nvme_iov_md": false 00:14:26.510 }, 00:14:26.510 "memory_domains": [ 00:14:26.510 { 00:14:26.510 "dma_device_id": "system", 00:14:26.510 "dma_device_type": 1 00:14:26.510 }, 00:14:26.510 { 00:14:26.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.510 "dma_device_type": 2 00:14:26.510 } 00:14:26.510 ], 00:14:26.510 "driver_specific": {} 00:14:26.510 } 00:14:26.510 ] 00:14:26.510 13:34:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.510 13:34:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:26.510 13:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:14:26.510 13:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:26.510 13:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.510 13:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 
00:14:26.510 13:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.510 13:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:26.510 13:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.510 13:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.510 13:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.510 13:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.510 13:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.510 13:34:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.510 13:34:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.510 13:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.510 13:34:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.510 13:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.510 "name": "Existed_Raid", 00:14:26.510 "uuid": "e2093a99-dcd9-4568-9064-271d1a4c6c2c", 00:14:26.510 "strip_size_kb": 64, 00:14:26.510 "state": "online", 00:14:26.510 "raid_level": "concat", 00:14:26.510 "superblock": false, 00:14:26.510 "num_base_bdevs": 3, 00:14:26.510 "num_base_bdevs_discovered": 3, 00:14:26.510 "num_base_bdevs_operational": 3, 00:14:26.510 "base_bdevs_list": [ 00:14:26.510 { 00:14:26.510 "name": "NewBaseBdev", 00:14:26.510 "uuid": "6e19e6e5-8d1d-48ad-96b4-855a140857b6", 00:14:26.510 "is_configured": true, 00:14:26.510 "data_offset": 0, 00:14:26.510 "data_size": 65536 00:14:26.510 }, 00:14:26.510 { 00:14:26.510 "name": 
"BaseBdev2", 00:14:26.510 "uuid": "38e909b5-6d0d-4c4d-afc0-b5981c85d3df", 00:14:26.510 "is_configured": true, 00:14:26.510 "data_offset": 0, 00:14:26.510 "data_size": 65536 00:14:26.510 }, 00:14:26.510 { 00:14:26.511 "name": "BaseBdev3", 00:14:26.511 "uuid": "2ca7f81c-76cb-42d9-90a2-f02a20ca283d", 00:14:26.511 "is_configured": true, 00:14:26.511 "data_offset": 0, 00:14:26.511 "data_size": 65536 00:14:26.511 } 00:14:26.511 ] 00:14:26.511 }' 00:14:26.511 13:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.511 13:34:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.079 13:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:27.080 13:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:27.080 13:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:27.080 13:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:27.080 13:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:27.080 13:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:27.080 13:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:27.080 13:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.080 13:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.080 13:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:27.080 [2024-11-20 13:34:26.411285] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:27.080 13:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:27.080 13:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:27.080 "name": "Existed_Raid", 00:14:27.080 "aliases": [ 00:14:27.080 "e2093a99-dcd9-4568-9064-271d1a4c6c2c" 00:14:27.080 ], 00:14:27.080 "product_name": "Raid Volume", 00:14:27.080 "block_size": 512, 00:14:27.080 "num_blocks": 196608, 00:14:27.080 "uuid": "e2093a99-dcd9-4568-9064-271d1a4c6c2c", 00:14:27.080 "assigned_rate_limits": { 00:14:27.080 "rw_ios_per_sec": 0, 00:14:27.080 "rw_mbytes_per_sec": 0, 00:14:27.080 "r_mbytes_per_sec": 0, 00:14:27.080 "w_mbytes_per_sec": 0 00:14:27.080 }, 00:14:27.080 "claimed": false, 00:14:27.080 "zoned": false, 00:14:27.080 "supported_io_types": { 00:14:27.080 "read": true, 00:14:27.080 "write": true, 00:14:27.080 "unmap": true, 00:14:27.080 "flush": true, 00:14:27.080 "reset": true, 00:14:27.080 "nvme_admin": false, 00:14:27.080 "nvme_io": false, 00:14:27.080 "nvme_io_md": false, 00:14:27.080 "write_zeroes": true, 00:14:27.080 "zcopy": false, 00:14:27.080 "get_zone_info": false, 00:14:27.080 "zone_management": false, 00:14:27.080 "zone_append": false, 00:14:27.080 "compare": false, 00:14:27.080 "compare_and_write": false, 00:14:27.080 "abort": false, 00:14:27.080 "seek_hole": false, 00:14:27.080 "seek_data": false, 00:14:27.080 "copy": false, 00:14:27.080 "nvme_iov_md": false 00:14:27.080 }, 00:14:27.080 "memory_domains": [ 00:14:27.080 { 00:14:27.080 "dma_device_id": "system", 00:14:27.080 "dma_device_type": 1 00:14:27.080 }, 00:14:27.080 { 00:14:27.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.080 "dma_device_type": 2 00:14:27.080 }, 00:14:27.080 { 00:14:27.080 "dma_device_id": "system", 00:14:27.080 "dma_device_type": 1 00:14:27.080 }, 00:14:27.080 { 00:14:27.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.080 "dma_device_type": 2 00:14:27.080 }, 00:14:27.080 { 00:14:27.080 "dma_device_id": "system", 00:14:27.080 "dma_device_type": 1 00:14:27.080 }, 00:14:27.080 { 00:14:27.080 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:27.080 "dma_device_type": 2 00:14:27.080 } 00:14:27.080 ], 00:14:27.080 "driver_specific": { 00:14:27.080 "raid": { 00:14:27.080 "uuid": "e2093a99-dcd9-4568-9064-271d1a4c6c2c", 00:14:27.080 "strip_size_kb": 64, 00:14:27.080 "state": "online", 00:14:27.080 "raid_level": "concat", 00:14:27.080 "superblock": false, 00:14:27.080 "num_base_bdevs": 3, 00:14:27.080 "num_base_bdevs_discovered": 3, 00:14:27.080 "num_base_bdevs_operational": 3, 00:14:27.080 "base_bdevs_list": [ 00:14:27.080 { 00:14:27.080 "name": "NewBaseBdev", 00:14:27.080 "uuid": "6e19e6e5-8d1d-48ad-96b4-855a140857b6", 00:14:27.080 "is_configured": true, 00:14:27.080 "data_offset": 0, 00:14:27.080 "data_size": 65536 00:14:27.080 }, 00:14:27.080 { 00:14:27.080 "name": "BaseBdev2", 00:14:27.080 "uuid": "38e909b5-6d0d-4c4d-afc0-b5981c85d3df", 00:14:27.080 "is_configured": true, 00:14:27.080 "data_offset": 0, 00:14:27.080 "data_size": 65536 00:14:27.080 }, 00:14:27.080 { 00:14:27.080 "name": "BaseBdev3", 00:14:27.080 "uuid": "2ca7f81c-76cb-42d9-90a2-f02a20ca283d", 00:14:27.080 "is_configured": true, 00:14:27.080 "data_offset": 0, 00:14:27.080 "data_size": 65536 00:14:27.080 } 00:14:27.080 ] 00:14:27.080 } 00:14:27.080 } 00:14:27.080 }' 00:14:27.080 13:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:27.080 13:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:27.080 BaseBdev2 00:14:27.080 BaseBdev3' 00:14:27.080 13:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.080 13:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:27.080 13:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.080 13:34:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:27.080 13:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.080 13:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.080 13:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.340 13:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.340 13:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.340 13:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.340 13:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.340 13:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:27.340 13:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.340 13:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.340 13:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.340 13:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.340 13:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.340 13:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.340 13:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.340 13:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:27.340 
13:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.340 13:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.340 13:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.340 13:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.340 13:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.340 13:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.340 13:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:27.340 13:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.340 13:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.340 [2024-11-20 13:34:26.718534] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:27.340 [2024-11-20 13:34:26.718567] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:27.340 [2024-11-20 13:34:26.718653] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:27.340 [2024-11-20 13:34:26.718710] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:27.340 [2024-11-20 13:34:26.718725] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:27.340 13:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.340 13:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65362 00:14:27.340 13:34:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 65362 ']' 00:14:27.340 13:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65362 00:14:27.340 13:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:27.340 13:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:27.340 13:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65362 00:14:27.340 13:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:27.340 13:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:27.340 13:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65362' 00:14:27.340 killing process with pid 65362 00:14:27.340 13:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 65362 00:14:27.340 [2024-11-20 13:34:26.773586] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:27.340 13:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65362 00:14:27.907 [2024-11-20 13:34:27.094951] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:28.844 13:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:28.844 00:14:28.844 real 0m10.761s 00:14:28.844 user 0m17.083s 00:14:28.844 sys 0m1.966s 00:14:28.844 13:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:28.844 ************************************ 00:14:28.844 END TEST raid_state_function_test 00:14:28.844 ************************************ 00:14:28.844 13:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.103 13:34:28 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test concat 3 true 00:14:29.103 13:34:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:29.103 13:34:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:29.103 13:34:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:29.103 ************************************ 00:14:29.103 START TEST raid_state_function_test_sb 00:14:29.103 ************************************ 00:14:29.103 13:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:14:29.103 13:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:14:29.103 13:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:29.103 13:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:29.103 13:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:29.103 13:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:29.103 13:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:29.103 13:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:29.103 13:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:29.103 13:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:29.103 13:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:29.103 13:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:29.103 13:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:29.103 13:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 
00:14:29.103 13:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:29.103 13:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:29.103 13:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:29.103 13:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:29.103 13:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:29.103 13:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:29.103 13:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:29.103 13:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:29.103 13:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:14:29.104 13:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:29.104 13:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:29.104 13:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:29.104 13:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:29.104 13:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:29.104 13:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=65991 00:14:29.104 13:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65991' 00:14:29.104 Process raid pid: 65991 00:14:29.104 13:34:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 65991 00:14:29.104 13:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 65991 ']' 00:14:29.104 13:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.104 13:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:29.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.104 13:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.104 13:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:29.104 13:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.104 [2024-11-20 13:34:28.480739] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:14:29.104 [2024-11-20 13:34:28.480994] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:29.363 [2024-11-20 13:34:28.671647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.363 [2024-11-20 13:34:28.797443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.708 [2024-11-20 13:34:29.035589] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:29.708 [2024-11-20 13:34:29.035641] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:30.283 13:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:30.283 13:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:30.283 13:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:30.283 13:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.283 13:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.283 [2024-11-20 13:34:29.469521] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:30.284 [2024-11-20 13:34:29.469582] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:30.284 [2024-11-20 13:34:29.469594] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:30.284 [2024-11-20 13:34:29.469607] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:30.284 [2024-11-20 13:34:29.469632] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:14:30.284 [2024-11-20 13:34:29.469645] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:30.284 13:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.284 13:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:30.284 13:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:30.284 13:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:30.284 13:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:30.284 13:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.284 13:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:30.284 13:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.284 13:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.284 13:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.284 13:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.284 13:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.284 13:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.284 13:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.284 13:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.284 13:34:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.284 13:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.284 "name": "Existed_Raid", 00:14:30.284 "uuid": "7038588d-cafa-47e5-a085-8496ebf0eb0d", 00:14:30.284 "strip_size_kb": 64, 00:14:30.284 "state": "configuring", 00:14:30.284 "raid_level": "concat", 00:14:30.284 "superblock": true, 00:14:30.284 "num_base_bdevs": 3, 00:14:30.284 "num_base_bdevs_discovered": 0, 00:14:30.284 "num_base_bdevs_operational": 3, 00:14:30.284 "base_bdevs_list": [ 00:14:30.284 { 00:14:30.284 "name": "BaseBdev1", 00:14:30.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.284 "is_configured": false, 00:14:30.284 "data_offset": 0, 00:14:30.284 "data_size": 0 00:14:30.284 }, 00:14:30.284 { 00:14:30.284 "name": "BaseBdev2", 00:14:30.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.284 "is_configured": false, 00:14:30.284 "data_offset": 0, 00:14:30.284 "data_size": 0 00:14:30.284 }, 00:14:30.284 { 00:14:30.284 "name": "BaseBdev3", 00:14:30.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.284 "is_configured": false, 00:14:30.284 "data_offset": 0, 00:14:30.284 "data_size": 0 00:14:30.284 } 00:14:30.284 ] 00:14:30.284 }' 00:14:30.284 13:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.284 13:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.544 13:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:30.544 13:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.544 13:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.544 [2024-11-20 13:34:29.900849] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:30.544 [2024-11-20 13:34:29.901573] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:30.544 13:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.544 13:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:30.544 13:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.544 13:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.544 [2024-11-20 13:34:29.908842] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:30.544 [2024-11-20 13:34:29.908897] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:30.544 [2024-11-20 13:34:29.908908] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:30.544 [2024-11-20 13:34:29.908921] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:30.544 [2024-11-20 13:34:29.908929] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:30.544 [2024-11-20 13:34:29.908942] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:30.544 13:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.544 13:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:30.544 13:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.544 13:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.544 [2024-11-20 13:34:29.959273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:30.544 BaseBdev1 
00:14:30.544 13:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.544 13:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:30.544 13:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:30.544 13:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:30.544 13:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:30.544 13:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:30.544 13:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:30.544 13:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:30.544 13:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.544 13:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.544 13:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.544 13:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:30.544 13:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.544 13:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.544 [ 00:14:30.544 { 00:14:30.544 "name": "BaseBdev1", 00:14:30.544 "aliases": [ 00:14:30.544 "e057477e-e86f-4e91-ba8e-f42e43e9f853" 00:14:30.544 ], 00:14:30.544 "product_name": "Malloc disk", 00:14:30.544 "block_size": 512, 00:14:30.544 "num_blocks": 65536, 00:14:30.544 "uuid": "e057477e-e86f-4e91-ba8e-f42e43e9f853", 00:14:30.544 "assigned_rate_limits": { 00:14:30.544 
"rw_ios_per_sec": 0, 00:14:30.544 "rw_mbytes_per_sec": 0, 00:14:30.544 "r_mbytes_per_sec": 0, 00:14:30.544 "w_mbytes_per_sec": 0 00:14:30.544 }, 00:14:30.544 "claimed": true, 00:14:30.544 "claim_type": "exclusive_write", 00:14:30.544 "zoned": false, 00:14:30.544 "supported_io_types": { 00:14:30.544 "read": true, 00:14:30.544 "write": true, 00:14:30.544 "unmap": true, 00:14:30.544 "flush": true, 00:14:30.544 "reset": true, 00:14:30.544 "nvme_admin": false, 00:14:30.544 "nvme_io": false, 00:14:30.544 "nvme_io_md": false, 00:14:30.544 "write_zeroes": true, 00:14:30.544 "zcopy": true, 00:14:30.544 "get_zone_info": false, 00:14:30.544 "zone_management": false, 00:14:30.544 "zone_append": false, 00:14:30.544 "compare": false, 00:14:30.544 "compare_and_write": false, 00:14:30.544 "abort": true, 00:14:30.544 "seek_hole": false, 00:14:30.544 "seek_data": false, 00:14:30.544 "copy": true, 00:14:30.544 "nvme_iov_md": false 00:14:30.544 }, 00:14:30.544 "memory_domains": [ 00:14:30.544 { 00:14:30.544 "dma_device_id": "system", 00:14:30.544 "dma_device_type": 1 00:14:30.544 }, 00:14:30.544 { 00:14:30.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.544 "dma_device_type": 2 00:14:30.544 } 00:14:30.544 ], 00:14:30.544 "driver_specific": {} 00:14:30.544 } 00:14:30.544 ] 00:14:30.544 13:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.544 13:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:30.544 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:30.544 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:30.544 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:30.544 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:14:30.544 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.544 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:30.544 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.544 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.544 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.544 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.544 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.544 13:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.544 13:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.544 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.803 13:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.803 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.803 "name": "Existed_Raid", 00:14:30.803 "uuid": "952e308a-fe1c-4115-a637-e615a64481db", 00:14:30.803 "strip_size_kb": 64, 00:14:30.803 "state": "configuring", 00:14:30.803 "raid_level": "concat", 00:14:30.803 "superblock": true, 00:14:30.803 "num_base_bdevs": 3, 00:14:30.803 "num_base_bdevs_discovered": 1, 00:14:30.803 "num_base_bdevs_operational": 3, 00:14:30.803 "base_bdevs_list": [ 00:14:30.803 { 00:14:30.803 "name": "BaseBdev1", 00:14:30.803 "uuid": "e057477e-e86f-4e91-ba8e-f42e43e9f853", 00:14:30.803 "is_configured": true, 00:14:30.803 "data_offset": 2048, 00:14:30.803 "data_size": 
63488 00:14:30.803 }, 00:14:30.803 { 00:14:30.803 "name": "BaseBdev2", 00:14:30.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.803 "is_configured": false, 00:14:30.803 "data_offset": 0, 00:14:30.803 "data_size": 0 00:14:30.803 }, 00:14:30.803 { 00:14:30.803 "name": "BaseBdev3", 00:14:30.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.803 "is_configured": false, 00:14:30.803 "data_offset": 0, 00:14:30.803 "data_size": 0 00:14:30.803 } 00:14:30.803 ] 00:14:30.803 }' 00:14:30.803 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.803 13:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.063 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:31.063 13:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.063 13:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.063 [2024-11-20 13:34:30.386776] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:31.063 [2024-11-20 13:34:30.386848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:31.063 13:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.063 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:31.063 13:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.063 13:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.063 [2024-11-20 13:34:30.398851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:31.063 [2024-11-20 
13:34:30.401124] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:31.063 [2024-11-20 13:34:30.401177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:31.063 [2024-11-20 13:34:30.401189] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:31.063 [2024-11-20 13:34:30.401203] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:31.063 13:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.063 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:31.063 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:31.063 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:31.063 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:31.063 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:31.063 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:31.063 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.063 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.063 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.063 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.063 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.063 13:34:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.063 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.063 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.063 13:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.063 13:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.063 13:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.063 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.063 "name": "Existed_Raid", 00:14:31.063 "uuid": "81a294a1-c112-4906-8383-a1ebb2a20530", 00:14:31.063 "strip_size_kb": 64, 00:14:31.063 "state": "configuring", 00:14:31.063 "raid_level": "concat", 00:14:31.063 "superblock": true, 00:14:31.063 "num_base_bdevs": 3, 00:14:31.063 "num_base_bdevs_discovered": 1, 00:14:31.063 "num_base_bdevs_operational": 3, 00:14:31.063 "base_bdevs_list": [ 00:14:31.063 { 00:14:31.063 "name": "BaseBdev1", 00:14:31.063 "uuid": "e057477e-e86f-4e91-ba8e-f42e43e9f853", 00:14:31.063 "is_configured": true, 00:14:31.063 "data_offset": 2048, 00:14:31.063 "data_size": 63488 00:14:31.063 }, 00:14:31.063 { 00:14:31.063 "name": "BaseBdev2", 00:14:31.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.063 "is_configured": false, 00:14:31.063 "data_offset": 0, 00:14:31.063 "data_size": 0 00:14:31.063 }, 00:14:31.063 { 00:14:31.063 "name": "BaseBdev3", 00:14:31.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.063 "is_configured": false, 00:14:31.063 "data_offset": 0, 00:14:31.063 "data_size": 0 00:14:31.063 } 00:14:31.063 ] 00:14:31.063 }' 00:14:31.063 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.063 13:34:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:31.631 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:31.631 13:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.631 13:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.631 [2024-11-20 13:34:30.905030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:31.631 BaseBdev2 00:14:31.631 13:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.631 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:31.631 13:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:31.631 13:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:31.631 13:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:31.631 13:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:31.631 13:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:31.631 13:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:31.631 13:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.631 13:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.631 13:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.631 13:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:31.631 13:34:30 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.631 13:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.631 [ 00:14:31.631 { 00:14:31.631 "name": "BaseBdev2", 00:14:31.631 "aliases": [ 00:14:31.631 "cffdb1ed-f915-4e6e-8762-710e1194d5d2" 00:14:31.631 ], 00:14:31.631 "product_name": "Malloc disk", 00:14:31.631 "block_size": 512, 00:14:31.631 "num_blocks": 65536, 00:14:31.631 "uuid": "cffdb1ed-f915-4e6e-8762-710e1194d5d2", 00:14:31.631 "assigned_rate_limits": { 00:14:31.631 "rw_ios_per_sec": 0, 00:14:31.631 "rw_mbytes_per_sec": 0, 00:14:31.631 "r_mbytes_per_sec": 0, 00:14:31.631 "w_mbytes_per_sec": 0 00:14:31.631 }, 00:14:31.631 "claimed": true, 00:14:31.631 "claim_type": "exclusive_write", 00:14:31.631 "zoned": false, 00:14:31.631 "supported_io_types": { 00:14:31.631 "read": true, 00:14:31.631 "write": true, 00:14:31.631 "unmap": true, 00:14:31.631 "flush": true, 00:14:31.631 "reset": true, 00:14:31.631 "nvme_admin": false, 00:14:31.631 "nvme_io": false, 00:14:31.631 "nvme_io_md": false, 00:14:31.631 "write_zeroes": true, 00:14:31.631 "zcopy": true, 00:14:31.631 "get_zone_info": false, 00:14:31.631 "zone_management": false, 00:14:31.631 "zone_append": false, 00:14:31.631 "compare": false, 00:14:31.631 "compare_and_write": false, 00:14:31.631 "abort": true, 00:14:31.631 "seek_hole": false, 00:14:31.631 "seek_data": false, 00:14:31.631 "copy": true, 00:14:31.631 "nvme_iov_md": false 00:14:31.631 }, 00:14:31.631 "memory_domains": [ 00:14:31.631 { 00:14:31.631 "dma_device_id": "system", 00:14:31.631 "dma_device_type": 1 00:14:31.631 }, 00:14:31.631 { 00:14:31.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.631 "dma_device_type": 2 00:14:31.631 } 00:14:31.631 ], 00:14:31.631 "driver_specific": {} 00:14:31.631 } 00:14:31.631 ] 00:14:31.631 13:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.631 13:34:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:14:31.631 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:31.631 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:31.631 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:31.631 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:31.631 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:31.631 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:31.631 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.631 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.631 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.631 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.631 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.631 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.631 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.631 13:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.631 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.631 13:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.631 13:34:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.631 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.631 "name": "Existed_Raid", 00:14:31.631 "uuid": "81a294a1-c112-4906-8383-a1ebb2a20530", 00:14:31.631 "strip_size_kb": 64, 00:14:31.632 "state": "configuring", 00:14:31.632 "raid_level": "concat", 00:14:31.632 "superblock": true, 00:14:31.632 "num_base_bdevs": 3, 00:14:31.632 "num_base_bdevs_discovered": 2, 00:14:31.632 "num_base_bdevs_operational": 3, 00:14:31.632 "base_bdevs_list": [ 00:14:31.632 { 00:14:31.632 "name": "BaseBdev1", 00:14:31.632 "uuid": "e057477e-e86f-4e91-ba8e-f42e43e9f853", 00:14:31.632 "is_configured": true, 00:14:31.632 "data_offset": 2048, 00:14:31.632 "data_size": 63488 00:14:31.632 }, 00:14:31.632 { 00:14:31.632 "name": "BaseBdev2", 00:14:31.632 "uuid": "cffdb1ed-f915-4e6e-8762-710e1194d5d2", 00:14:31.632 "is_configured": true, 00:14:31.632 "data_offset": 2048, 00:14:31.632 "data_size": 63488 00:14:31.632 }, 00:14:31.632 { 00:14:31.632 "name": "BaseBdev3", 00:14:31.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.632 "is_configured": false, 00:14:31.632 "data_offset": 0, 00:14:31.632 "data_size": 0 00:14:31.632 } 00:14:31.632 ] 00:14:31.632 }' 00:14:31.632 13:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.632 13:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.889 13:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:31.889 13:34:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.889 13:34:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.148 [2024-11-20 13:34:31.387333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:32.148 [2024-11-20 13:34:31.387614] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:32.148 [2024-11-20 13:34:31.387638] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:32.148 [2024-11-20 13:34:31.387933] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:32.148 BaseBdev3 00:14:32.148 [2024-11-20 13:34:31.388117] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:32.148 [2024-11-20 13:34:31.388129] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:32.148 [2024-11-20 13:34:31.388276] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:32.148 13:34:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.148 13:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:32.148 13:34:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:32.148 13:34:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:32.148 13:34:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:32.148 13:34:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:32.148 13:34:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:32.148 13:34:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:32.148 13:34:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.148 13:34:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.148 13:34:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:32.148 13:34:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:32.148 13:34:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.148 13:34:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.148 [ 00:14:32.148 { 00:14:32.148 "name": "BaseBdev3", 00:14:32.148 "aliases": [ 00:14:32.148 "a25dd9d2-fea7-4982-bef3-231527194d1d" 00:14:32.148 ], 00:14:32.148 "product_name": "Malloc disk", 00:14:32.148 "block_size": 512, 00:14:32.148 "num_blocks": 65536, 00:14:32.148 "uuid": "a25dd9d2-fea7-4982-bef3-231527194d1d", 00:14:32.148 "assigned_rate_limits": { 00:14:32.148 "rw_ios_per_sec": 0, 00:14:32.148 "rw_mbytes_per_sec": 0, 00:14:32.148 "r_mbytes_per_sec": 0, 00:14:32.148 "w_mbytes_per_sec": 0 00:14:32.148 }, 00:14:32.148 "claimed": true, 00:14:32.148 "claim_type": "exclusive_write", 00:14:32.148 "zoned": false, 00:14:32.148 "supported_io_types": { 00:14:32.148 "read": true, 00:14:32.148 "write": true, 00:14:32.148 "unmap": true, 00:14:32.148 "flush": true, 00:14:32.148 "reset": true, 00:14:32.148 "nvme_admin": false, 00:14:32.148 "nvme_io": false, 00:14:32.148 "nvme_io_md": false, 00:14:32.148 "write_zeroes": true, 00:14:32.148 "zcopy": true, 00:14:32.148 "get_zone_info": false, 00:14:32.148 "zone_management": false, 00:14:32.148 "zone_append": false, 00:14:32.148 "compare": false, 00:14:32.148 "compare_and_write": false, 00:14:32.148 "abort": true, 00:14:32.148 "seek_hole": false, 00:14:32.148 "seek_data": false, 00:14:32.148 "copy": true, 00:14:32.148 "nvme_iov_md": false 00:14:32.148 }, 00:14:32.148 "memory_domains": [ 00:14:32.148 { 00:14:32.148 "dma_device_id": "system", 00:14:32.148 "dma_device_type": 1 00:14:32.148 }, 00:14:32.148 { 00:14:32.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.148 "dma_device_type": 2 00:14:32.148 } 00:14:32.148 ], 00:14:32.148 "driver_specific": 
{} 00:14:32.148 } 00:14:32.148 ] 00:14:32.148 13:34:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.148 13:34:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:32.148 13:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:32.148 13:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:32.148 13:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:14:32.148 13:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:32.148 13:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.148 13:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:32.148 13:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.148 13:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.148 13:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.148 13:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.148 13:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.148 13:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.148 13:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.148 13:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.148 13:34:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:14:32.148 13:34:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.148 13:34:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.149 13:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.149 "name": "Existed_Raid", 00:14:32.149 "uuid": "81a294a1-c112-4906-8383-a1ebb2a20530", 00:14:32.149 "strip_size_kb": 64, 00:14:32.149 "state": "online", 00:14:32.149 "raid_level": "concat", 00:14:32.149 "superblock": true, 00:14:32.149 "num_base_bdevs": 3, 00:14:32.149 "num_base_bdevs_discovered": 3, 00:14:32.149 "num_base_bdevs_operational": 3, 00:14:32.149 "base_bdevs_list": [ 00:14:32.149 { 00:14:32.149 "name": "BaseBdev1", 00:14:32.149 "uuid": "e057477e-e86f-4e91-ba8e-f42e43e9f853", 00:14:32.149 "is_configured": true, 00:14:32.149 "data_offset": 2048, 00:14:32.149 "data_size": 63488 00:14:32.149 }, 00:14:32.149 { 00:14:32.149 "name": "BaseBdev2", 00:14:32.149 "uuid": "cffdb1ed-f915-4e6e-8762-710e1194d5d2", 00:14:32.149 "is_configured": true, 00:14:32.149 "data_offset": 2048, 00:14:32.149 "data_size": 63488 00:14:32.149 }, 00:14:32.149 { 00:14:32.149 "name": "BaseBdev3", 00:14:32.149 "uuid": "a25dd9d2-fea7-4982-bef3-231527194d1d", 00:14:32.149 "is_configured": true, 00:14:32.149 "data_offset": 2048, 00:14:32.149 "data_size": 63488 00:14:32.149 } 00:14:32.149 ] 00:14:32.149 }' 00:14:32.149 13:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.149 13:34:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.407 13:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:32.407 13:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:32.407 13:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:14:32.407 13:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:32.407 13:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:32.407 13:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:32.407 13:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:32.407 13:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:32.407 13:34:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.407 13:34:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.407 [2024-11-20 13:34:31.875514] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:32.666 13:34:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.666 13:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:32.666 "name": "Existed_Raid", 00:14:32.666 "aliases": [ 00:14:32.666 "81a294a1-c112-4906-8383-a1ebb2a20530" 00:14:32.666 ], 00:14:32.666 "product_name": "Raid Volume", 00:14:32.666 "block_size": 512, 00:14:32.666 "num_blocks": 190464, 00:14:32.666 "uuid": "81a294a1-c112-4906-8383-a1ebb2a20530", 00:14:32.666 "assigned_rate_limits": { 00:14:32.666 "rw_ios_per_sec": 0, 00:14:32.666 "rw_mbytes_per_sec": 0, 00:14:32.666 "r_mbytes_per_sec": 0, 00:14:32.666 "w_mbytes_per_sec": 0 00:14:32.666 }, 00:14:32.666 "claimed": false, 00:14:32.666 "zoned": false, 00:14:32.666 "supported_io_types": { 00:14:32.666 "read": true, 00:14:32.666 "write": true, 00:14:32.666 "unmap": true, 00:14:32.666 "flush": true, 00:14:32.666 "reset": true, 00:14:32.666 "nvme_admin": false, 00:14:32.666 "nvme_io": false, 00:14:32.666 "nvme_io_md": false, 00:14:32.666 
"write_zeroes": true, 00:14:32.666 "zcopy": false, 00:14:32.666 "get_zone_info": false, 00:14:32.666 "zone_management": false, 00:14:32.666 "zone_append": false, 00:14:32.666 "compare": false, 00:14:32.666 "compare_and_write": false, 00:14:32.666 "abort": false, 00:14:32.666 "seek_hole": false, 00:14:32.666 "seek_data": false, 00:14:32.666 "copy": false, 00:14:32.666 "nvme_iov_md": false 00:14:32.666 }, 00:14:32.666 "memory_domains": [ 00:14:32.666 { 00:14:32.666 "dma_device_id": "system", 00:14:32.666 "dma_device_type": 1 00:14:32.666 }, 00:14:32.666 { 00:14:32.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.666 "dma_device_type": 2 00:14:32.666 }, 00:14:32.666 { 00:14:32.666 "dma_device_id": "system", 00:14:32.666 "dma_device_type": 1 00:14:32.666 }, 00:14:32.666 { 00:14:32.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.666 "dma_device_type": 2 00:14:32.666 }, 00:14:32.666 { 00:14:32.666 "dma_device_id": "system", 00:14:32.666 "dma_device_type": 1 00:14:32.666 }, 00:14:32.666 { 00:14:32.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.666 "dma_device_type": 2 00:14:32.666 } 00:14:32.666 ], 00:14:32.666 "driver_specific": { 00:14:32.666 "raid": { 00:14:32.666 "uuid": "81a294a1-c112-4906-8383-a1ebb2a20530", 00:14:32.666 "strip_size_kb": 64, 00:14:32.666 "state": "online", 00:14:32.666 "raid_level": "concat", 00:14:32.666 "superblock": true, 00:14:32.666 "num_base_bdevs": 3, 00:14:32.666 "num_base_bdevs_discovered": 3, 00:14:32.666 "num_base_bdevs_operational": 3, 00:14:32.666 "base_bdevs_list": [ 00:14:32.666 { 00:14:32.666 "name": "BaseBdev1", 00:14:32.666 "uuid": "e057477e-e86f-4e91-ba8e-f42e43e9f853", 00:14:32.666 "is_configured": true, 00:14:32.666 "data_offset": 2048, 00:14:32.666 "data_size": 63488 00:14:32.666 }, 00:14:32.666 { 00:14:32.666 "name": "BaseBdev2", 00:14:32.666 "uuid": "cffdb1ed-f915-4e6e-8762-710e1194d5d2", 00:14:32.667 "is_configured": true, 00:14:32.667 "data_offset": 2048, 00:14:32.667 "data_size": 63488 00:14:32.667 }, 
00:14:32.667 { 00:14:32.667 "name": "BaseBdev3", 00:14:32.667 "uuid": "a25dd9d2-fea7-4982-bef3-231527194d1d", 00:14:32.667 "is_configured": true, 00:14:32.667 "data_offset": 2048, 00:14:32.667 "data_size": 63488 00:14:32.667 } 00:14:32.667 ] 00:14:32.667 } 00:14:32.667 } 00:14:32.667 }' 00:14:32.667 13:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:32.667 13:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:32.667 BaseBdev2 00:14:32.667 BaseBdev3' 00:14:32.667 13:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:32.667 13:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:32.667 13:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:32.667 13:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:32.667 13:34:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.667 13:34:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.667 13:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:32.667 13:34:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.667 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:32.667 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:32.667 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:32.667 
13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:32.667 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:32.667 13:34:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.667 13:34:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.667 13:34:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.667 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:32.667 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:32.667 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:32.667 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:32.667 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:32.667 13:34:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.667 13:34:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.667 13:34:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.667 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:32.667 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:32.667 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:32.667 13:34:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.667 13:34:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.667 [2024-11-20 13:34:32.134960] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:32.667 [2024-11-20 13:34:32.134996] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:32.667 [2024-11-20 13:34:32.135053] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:32.925 13:34:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.925 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:32.925 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:14:32.925 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:32.925 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:14:32.925 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:32.925 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:14:32.925 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:32.925 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:32.925 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:32.925 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.925 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:32.925 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:14:32.925 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.925 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.926 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.926 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.926 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.926 13:34:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.926 13:34:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.926 13:34:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.926 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.926 "name": "Existed_Raid", 00:14:32.926 "uuid": "81a294a1-c112-4906-8383-a1ebb2a20530", 00:14:32.926 "strip_size_kb": 64, 00:14:32.926 "state": "offline", 00:14:32.926 "raid_level": "concat", 00:14:32.926 "superblock": true, 00:14:32.926 "num_base_bdevs": 3, 00:14:32.926 "num_base_bdevs_discovered": 2, 00:14:32.926 "num_base_bdevs_operational": 2, 00:14:32.926 "base_bdevs_list": [ 00:14:32.926 { 00:14:32.926 "name": null, 00:14:32.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.926 "is_configured": false, 00:14:32.926 "data_offset": 0, 00:14:32.926 "data_size": 63488 00:14:32.926 }, 00:14:32.926 { 00:14:32.926 "name": "BaseBdev2", 00:14:32.926 "uuid": "cffdb1ed-f915-4e6e-8762-710e1194d5d2", 00:14:32.926 "is_configured": true, 00:14:32.926 "data_offset": 2048, 00:14:32.926 "data_size": 63488 00:14:32.926 }, 00:14:32.926 { 00:14:32.926 "name": "BaseBdev3", 00:14:32.926 "uuid": "a25dd9d2-fea7-4982-bef3-231527194d1d", 
00:14:32.926 "is_configured": true, 00:14:32.926 "data_offset": 2048, 00:14:32.926 "data_size": 63488 00:14:32.926 } 00:14:32.926 ] 00:14:32.926 }' 00:14:32.926 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.926 13:34:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.183 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:33.183 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:33.183 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:33.183 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.183 13:34:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.183 13:34:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.442 13:34:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.442 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:33.442 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:33.442 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:33.442 13:34:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.442 13:34:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.442 [2024-11-20 13:34:32.686420] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:33.442 13:34:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.442 13:34:32 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:33.442 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:33.442 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:33.442 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.442 13:34:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.442 13:34:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.442 13:34:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.442 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:33.442 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:33.442 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:33.442 13:34:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.442 13:34:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.442 [2024-11-20 13:34:32.831040] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:33.442 [2024-11-20 13:34:32.831125] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:33.701 13:34:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.701 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:33.701 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:33.701 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] 
| select(.)' 00:14:33.701 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.701 13:34:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.701 13:34:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.701 13:34:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.701 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:33.701 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:33.701 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:33.701 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:33.701 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:33.701 13:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:33.701 13:34:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.701 13:34:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.701 BaseBdev2 00:14:33.701 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.701 13:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:33.701 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:33.701 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:33.701 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:33.701 13:34:33 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:33.701 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:33.701 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:33.701 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.701 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.701 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.701 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:33.701 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.701 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.701 [ 00:14:33.701 { 00:14:33.701 "name": "BaseBdev2", 00:14:33.701 "aliases": [ 00:14:33.701 "f83ed9ed-c960-4de2-aad9-317c5d0a7b3a" 00:14:33.701 ], 00:14:33.701 "product_name": "Malloc disk", 00:14:33.701 "block_size": 512, 00:14:33.701 "num_blocks": 65536, 00:14:33.701 "uuid": "f83ed9ed-c960-4de2-aad9-317c5d0a7b3a", 00:14:33.701 "assigned_rate_limits": { 00:14:33.701 "rw_ios_per_sec": 0, 00:14:33.701 "rw_mbytes_per_sec": 0, 00:14:33.701 "r_mbytes_per_sec": 0, 00:14:33.701 "w_mbytes_per_sec": 0 00:14:33.701 }, 00:14:33.701 "claimed": false, 00:14:33.701 "zoned": false, 00:14:33.701 "supported_io_types": { 00:14:33.701 "read": true, 00:14:33.701 "write": true, 00:14:33.701 "unmap": true, 00:14:33.701 "flush": true, 00:14:33.701 "reset": true, 00:14:33.701 "nvme_admin": false, 00:14:33.701 "nvme_io": false, 00:14:33.701 "nvme_io_md": false, 00:14:33.701 "write_zeroes": true, 00:14:33.701 "zcopy": true, 00:14:33.701 "get_zone_info": false, 00:14:33.701 "zone_management": false, 00:14:33.701 
"zone_append": false, 00:14:33.701 "compare": false, 00:14:33.701 "compare_and_write": false, 00:14:33.701 "abort": true, 00:14:33.701 "seek_hole": false, 00:14:33.701 "seek_data": false, 00:14:33.701 "copy": true, 00:14:33.701 "nvme_iov_md": false 00:14:33.701 }, 00:14:33.701 "memory_domains": [ 00:14:33.701 { 00:14:33.701 "dma_device_id": "system", 00:14:33.701 "dma_device_type": 1 00:14:33.701 }, 00:14:33.701 { 00:14:33.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:33.701 "dma_device_type": 2 00:14:33.701 } 00:14:33.701 ], 00:14:33.701 "driver_specific": {} 00:14:33.701 } 00:14:33.701 ] 00:14:33.701 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.701 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:33.701 13:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:33.701 13:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:33.701 13:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:33.701 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.701 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.701 BaseBdev3 00:14:33.701 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.701 13:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:33.701 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:33.701 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:33.701 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:33.702 
13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:33.702 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:33.702 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:33.702 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.702 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.702 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.702 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:33.702 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.702 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.702 [ 00:14:33.702 { 00:14:33.702 "name": "BaseBdev3", 00:14:33.702 "aliases": [ 00:14:33.702 "f1d087bf-b3b3-4784-b015-b4a9c5d2f49d" 00:14:33.702 ], 00:14:33.702 "product_name": "Malloc disk", 00:14:33.702 "block_size": 512, 00:14:33.702 "num_blocks": 65536, 00:14:33.702 "uuid": "f1d087bf-b3b3-4784-b015-b4a9c5d2f49d", 00:14:33.702 "assigned_rate_limits": { 00:14:33.702 "rw_ios_per_sec": 0, 00:14:33.702 "rw_mbytes_per_sec": 0, 00:14:33.702 "r_mbytes_per_sec": 0, 00:14:33.702 "w_mbytes_per_sec": 0 00:14:33.702 }, 00:14:33.702 "claimed": false, 00:14:33.702 "zoned": false, 00:14:33.702 "supported_io_types": { 00:14:33.702 "read": true, 00:14:33.702 "write": true, 00:14:33.702 "unmap": true, 00:14:33.702 "flush": true, 00:14:33.702 "reset": true, 00:14:33.702 "nvme_admin": false, 00:14:33.702 "nvme_io": false, 00:14:33.702 "nvme_io_md": false, 00:14:33.702 "write_zeroes": true, 00:14:33.702 "zcopy": true, 00:14:33.702 "get_zone_info": false, 
00:14:33.702 "zone_management": false, 00:14:33.702 "zone_append": false, 00:14:33.702 "compare": false, 00:14:33.702 "compare_and_write": false, 00:14:33.702 "abort": true, 00:14:33.702 "seek_hole": false, 00:14:33.702 "seek_data": false, 00:14:33.702 "copy": true, 00:14:33.702 "nvme_iov_md": false 00:14:33.702 }, 00:14:33.702 "memory_domains": [ 00:14:33.702 { 00:14:33.702 "dma_device_id": "system", 00:14:33.702 "dma_device_type": 1 00:14:33.702 }, 00:14:33.702 { 00:14:33.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:33.702 "dma_device_type": 2 00:14:33.702 } 00:14:33.702 ], 00:14:33.702 "driver_specific": {} 00:14:33.702 } 00:14:33.702 ] 00:14:33.702 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.702 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:33.702 13:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:33.702 13:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:33.702 13:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:33.702 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.702 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.702 [2024-11-20 13:34:33.162353] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:33.702 [2024-11-20 13:34:33.162409] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:33.702 [2024-11-20 13:34:33.162439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:33.702 [2024-11-20 13:34:33.164533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:14:33.702 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.702 13:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:33.702 13:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:33.702 13:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:33.702 13:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:33.702 13:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.702 13:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:33.702 13:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.702 13:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.702 13:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.702 13:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.702 13:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.702 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.702 13:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:33.702 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.960 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.960 13:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:14:33.960 "name": "Existed_Raid", 00:14:33.960 "uuid": "8931013c-57ee-4df2-af41-ba1ad33a01d2", 00:14:33.960 "strip_size_kb": 64, 00:14:33.960 "state": "configuring", 00:14:33.960 "raid_level": "concat", 00:14:33.960 "superblock": true, 00:14:33.960 "num_base_bdevs": 3, 00:14:33.960 "num_base_bdevs_discovered": 2, 00:14:33.960 "num_base_bdevs_operational": 3, 00:14:33.960 "base_bdevs_list": [ 00:14:33.960 { 00:14:33.960 "name": "BaseBdev1", 00:14:33.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.960 "is_configured": false, 00:14:33.960 "data_offset": 0, 00:14:33.960 "data_size": 0 00:14:33.960 }, 00:14:33.960 { 00:14:33.960 "name": "BaseBdev2", 00:14:33.960 "uuid": "f83ed9ed-c960-4de2-aad9-317c5d0a7b3a", 00:14:33.960 "is_configured": true, 00:14:33.960 "data_offset": 2048, 00:14:33.960 "data_size": 63488 00:14:33.960 }, 00:14:33.960 { 00:14:33.960 "name": "BaseBdev3", 00:14:33.960 "uuid": "f1d087bf-b3b3-4784-b015-b4a9c5d2f49d", 00:14:33.960 "is_configured": true, 00:14:33.960 "data_offset": 2048, 00:14:33.960 "data_size": 63488 00:14:33.960 } 00:14:33.960 ] 00:14:33.960 }' 00:14:33.960 13:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.960 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.219 13:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:34.219 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.219 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.220 [2024-11-20 13:34:33.598233] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:34.220 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.220 13:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:34.220 13:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:34.220 13:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:34.220 13:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:34.220 13:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.220 13:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:34.220 13:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.220 13:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.220 13:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.220 13:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.220 13:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.220 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.220 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.220 13:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:34.220 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.220 13:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.220 "name": "Existed_Raid", 00:14:34.220 "uuid": "8931013c-57ee-4df2-af41-ba1ad33a01d2", 00:14:34.220 "strip_size_kb": 64, 00:14:34.220 "state": "configuring", 00:14:34.220 "raid_level": "concat", 
00:14:34.220 "superblock": true, 00:14:34.220 "num_base_bdevs": 3, 00:14:34.220 "num_base_bdevs_discovered": 1, 00:14:34.220 "num_base_bdevs_operational": 3, 00:14:34.220 "base_bdevs_list": [ 00:14:34.220 { 00:14:34.220 "name": "BaseBdev1", 00:14:34.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.220 "is_configured": false, 00:14:34.220 "data_offset": 0, 00:14:34.220 "data_size": 0 00:14:34.220 }, 00:14:34.220 { 00:14:34.220 "name": null, 00:14:34.220 "uuid": "f83ed9ed-c960-4de2-aad9-317c5d0a7b3a", 00:14:34.220 "is_configured": false, 00:14:34.220 "data_offset": 0, 00:14:34.220 "data_size": 63488 00:14:34.220 }, 00:14:34.220 { 00:14:34.220 "name": "BaseBdev3", 00:14:34.220 "uuid": "f1d087bf-b3b3-4784-b015-b4a9c5d2f49d", 00:14:34.220 "is_configured": true, 00:14:34.220 "data_offset": 2048, 00:14:34.220 "data_size": 63488 00:14:34.220 } 00:14:34.220 ] 00:14:34.220 }' 00:14:34.220 13:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.220 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.794 13:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:34.794 13:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.794 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.794 13:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.794 13:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.794 13:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:34.794 13:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:34.794 13:34:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.794 13:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.794 [2024-11-20 13:34:34.063484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:34.794 BaseBdev1 00:14:34.794 13:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.794 13:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:34.794 13:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:34.794 13:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:34.794 13:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:34.794 13:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:34.794 13:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:34.794 13:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:34.794 13:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.794 13:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.794 13:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.794 13:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:34.794 13:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.794 13:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.794 [ 00:14:34.794 { 00:14:34.794 "name": "BaseBdev1", 00:14:34.794 
"aliases": [ 00:14:34.794 "d101eaa4-e9de-4310-bb8e-ac85bce5aedc" 00:14:34.794 ], 00:14:34.794 "product_name": "Malloc disk", 00:14:34.794 "block_size": 512, 00:14:34.794 "num_blocks": 65536, 00:14:34.794 "uuid": "d101eaa4-e9de-4310-bb8e-ac85bce5aedc", 00:14:34.794 "assigned_rate_limits": { 00:14:34.794 "rw_ios_per_sec": 0, 00:14:34.794 "rw_mbytes_per_sec": 0, 00:14:34.794 "r_mbytes_per_sec": 0, 00:14:34.794 "w_mbytes_per_sec": 0 00:14:34.794 }, 00:14:34.794 "claimed": true, 00:14:34.794 "claim_type": "exclusive_write", 00:14:34.794 "zoned": false, 00:14:34.794 "supported_io_types": { 00:14:34.794 "read": true, 00:14:34.794 "write": true, 00:14:34.794 "unmap": true, 00:14:34.794 "flush": true, 00:14:34.794 "reset": true, 00:14:34.794 "nvme_admin": false, 00:14:34.794 "nvme_io": false, 00:14:34.794 "nvme_io_md": false, 00:14:34.794 "write_zeroes": true, 00:14:34.794 "zcopy": true, 00:14:34.794 "get_zone_info": false, 00:14:34.794 "zone_management": false, 00:14:34.794 "zone_append": false, 00:14:34.794 "compare": false, 00:14:34.794 "compare_and_write": false, 00:14:34.794 "abort": true, 00:14:34.794 "seek_hole": false, 00:14:34.794 "seek_data": false, 00:14:34.794 "copy": true, 00:14:34.794 "nvme_iov_md": false 00:14:34.794 }, 00:14:34.794 "memory_domains": [ 00:14:34.794 { 00:14:34.794 "dma_device_id": "system", 00:14:34.794 "dma_device_type": 1 00:14:34.794 }, 00:14:34.794 { 00:14:34.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.794 "dma_device_type": 2 00:14:34.794 } 00:14:34.794 ], 00:14:34.794 "driver_specific": {} 00:14:34.794 } 00:14:34.794 ] 00:14:34.794 13:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.794 13:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:34.794 13:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:34.795 13:34:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:34.795 13:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:34.795 13:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:34.795 13:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.795 13:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:34.795 13:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.795 13:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.795 13:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.795 13:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.795 13:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.795 13:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.795 13:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.795 13:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:34.795 13:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.795 13:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.795 "name": "Existed_Raid", 00:14:34.795 "uuid": "8931013c-57ee-4df2-af41-ba1ad33a01d2", 00:14:34.795 "strip_size_kb": 64, 00:14:34.795 "state": "configuring", 00:14:34.795 "raid_level": "concat", 00:14:34.795 "superblock": true, 00:14:34.795 "num_base_bdevs": 3, 00:14:34.795 
"num_base_bdevs_discovered": 2, 00:14:34.795 "num_base_bdevs_operational": 3, 00:14:34.795 "base_bdevs_list": [ 00:14:34.795 { 00:14:34.795 "name": "BaseBdev1", 00:14:34.795 "uuid": "d101eaa4-e9de-4310-bb8e-ac85bce5aedc", 00:14:34.795 "is_configured": true, 00:14:34.795 "data_offset": 2048, 00:14:34.795 "data_size": 63488 00:14:34.795 }, 00:14:34.795 { 00:14:34.795 "name": null, 00:14:34.795 "uuid": "f83ed9ed-c960-4de2-aad9-317c5d0a7b3a", 00:14:34.795 "is_configured": false, 00:14:34.795 "data_offset": 0, 00:14:34.795 "data_size": 63488 00:14:34.795 }, 00:14:34.795 { 00:14:34.795 "name": "BaseBdev3", 00:14:34.795 "uuid": "f1d087bf-b3b3-4784-b015-b4a9c5d2f49d", 00:14:34.795 "is_configured": true, 00:14:34.795 "data_offset": 2048, 00:14:34.795 "data_size": 63488 00:14:34.795 } 00:14:34.795 ] 00:14:34.795 }' 00:14:34.795 13:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.795 13:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.054 13:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:35.054 13:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.054 13:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.054 13:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.054 13:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.054 13:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:35.054 13:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:35.054 13:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.054 13:34:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.054 [2024-11-20 13:34:34.538905] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:35.313 13:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.313 13:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:35.313 13:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:35.313 13:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:35.314 13:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:35.314 13:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:35.314 13:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:35.314 13:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.314 13:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.314 13:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.314 13:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.314 13:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.314 13:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.314 13:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.314 13:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.314 13:34:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.314 13:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.314 "name": "Existed_Raid", 00:14:35.314 "uuid": "8931013c-57ee-4df2-af41-ba1ad33a01d2", 00:14:35.314 "strip_size_kb": 64, 00:14:35.314 "state": "configuring", 00:14:35.314 "raid_level": "concat", 00:14:35.314 "superblock": true, 00:14:35.314 "num_base_bdevs": 3, 00:14:35.314 "num_base_bdevs_discovered": 1, 00:14:35.314 "num_base_bdevs_operational": 3, 00:14:35.314 "base_bdevs_list": [ 00:14:35.314 { 00:14:35.314 "name": "BaseBdev1", 00:14:35.314 "uuid": "d101eaa4-e9de-4310-bb8e-ac85bce5aedc", 00:14:35.314 "is_configured": true, 00:14:35.314 "data_offset": 2048, 00:14:35.314 "data_size": 63488 00:14:35.314 }, 00:14:35.314 { 00:14:35.314 "name": null, 00:14:35.314 "uuid": "f83ed9ed-c960-4de2-aad9-317c5d0a7b3a", 00:14:35.314 "is_configured": false, 00:14:35.314 "data_offset": 0, 00:14:35.314 "data_size": 63488 00:14:35.314 }, 00:14:35.314 { 00:14:35.314 "name": null, 00:14:35.314 "uuid": "f1d087bf-b3b3-4784-b015-b4a9c5d2f49d", 00:14:35.314 "is_configured": false, 00:14:35.314 "data_offset": 0, 00:14:35.314 "data_size": 63488 00:14:35.314 } 00:14:35.314 ] 00:14:35.314 }' 00:14:35.314 13:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.314 13:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.573 13:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.573 13:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:35.573 13:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.573 13:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.573 13:34:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.573 13:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:35.573 13:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:35.573 13:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.573 13:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.573 [2024-11-20 13:34:35.014468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:35.573 13:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.573 13:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:35.573 13:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:35.573 13:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:35.573 13:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:35.573 13:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:35.573 13:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:35.573 13:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.573 13:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.573 13:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.573 13:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:14:35.573 13:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.573 13:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.573 13:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.573 13:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.573 13:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.832 13:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.832 "name": "Existed_Raid", 00:14:35.832 "uuid": "8931013c-57ee-4df2-af41-ba1ad33a01d2", 00:14:35.832 "strip_size_kb": 64, 00:14:35.832 "state": "configuring", 00:14:35.832 "raid_level": "concat", 00:14:35.832 "superblock": true, 00:14:35.832 "num_base_bdevs": 3, 00:14:35.832 "num_base_bdevs_discovered": 2, 00:14:35.832 "num_base_bdevs_operational": 3, 00:14:35.832 "base_bdevs_list": [ 00:14:35.832 { 00:14:35.832 "name": "BaseBdev1", 00:14:35.832 "uuid": "d101eaa4-e9de-4310-bb8e-ac85bce5aedc", 00:14:35.832 "is_configured": true, 00:14:35.832 "data_offset": 2048, 00:14:35.832 "data_size": 63488 00:14:35.832 }, 00:14:35.832 { 00:14:35.832 "name": null, 00:14:35.832 "uuid": "f83ed9ed-c960-4de2-aad9-317c5d0a7b3a", 00:14:35.832 "is_configured": false, 00:14:35.832 "data_offset": 0, 00:14:35.832 "data_size": 63488 00:14:35.832 }, 00:14:35.832 { 00:14:35.832 "name": "BaseBdev3", 00:14:35.832 "uuid": "f1d087bf-b3b3-4784-b015-b4a9c5d2f49d", 00:14:35.832 "is_configured": true, 00:14:35.832 "data_offset": 2048, 00:14:35.832 "data_size": 63488 00:14:35.832 } 00:14:35.832 ] 00:14:35.832 }' 00:14:35.832 13:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.832 13:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:14:36.091 13:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.091 13:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.091 13:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.091 13:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:36.091 13:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.091 13:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:36.091 13:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:36.091 13:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.091 13:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.091 [2024-11-20 13:34:35.494474] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:36.350 13:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.350 13:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:36.350 13:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:36.350 13:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:36.350 13:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:36.350 13:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.350 13:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:14:36.350 13:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.350 13:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.350 13:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.350 13:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.350 13:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.350 13:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.350 13:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.350 13:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.350 13:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.350 13:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.350 "name": "Existed_Raid", 00:14:36.350 "uuid": "8931013c-57ee-4df2-af41-ba1ad33a01d2", 00:14:36.350 "strip_size_kb": 64, 00:14:36.350 "state": "configuring", 00:14:36.350 "raid_level": "concat", 00:14:36.350 "superblock": true, 00:14:36.350 "num_base_bdevs": 3, 00:14:36.350 "num_base_bdevs_discovered": 1, 00:14:36.350 "num_base_bdevs_operational": 3, 00:14:36.350 "base_bdevs_list": [ 00:14:36.350 { 00:14:36.350 "name": null, 00:14:36.350 "uuid": "d101eaa4-e9de-4310-bb8e-ac85bce5aedc", 00:14:36.350 "is_configured": false, 00:14:36.350 "data_offset": 0, 00:14:36.350 "data_size": 63488 00:14:36.350 }, 00:14:36.350 { 00:14:36.350 "name": null, 00:14:36.350 "uuid": "f83ed9ed-c960-4de2-aad9-317c5d0a7b3a", 00:14:36.350 "is_configured": false, 00:14:36.350 "data_offset": 0, 00:14:36.350 "data_size": 63488 00:14:36.350 
}, 00:14:36.350 { 00:14:36.350 "name": "BaseBdev3", 00:14:36.350 "uuid": "f1d087bf-b3b3-4784-b015-b4a9c5d2f49d", 00:14:36.350 "is_configured": true, 00:14:36.350 "data_offset": 2048, 00:14:36.350 "data_size": 63488 00:14:36.350 } 00:14:36.350 ] 00:14:36.350 }' 00:14:36.350 13:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.350 13:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.608 13:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.608 13:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.608 13:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.608 13:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:36.608 13:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.867 13:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:36.867 13:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:36.867 13:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.867 13:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.867 [2024-11-20 13:34:36.114218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:36.867 13:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.867 13:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:36.867 13:34:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:36.867 13:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:36.867 13:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:36.867 13:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.867 13:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:36.867 13:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.867 13:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.867 13:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.867 13:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.867 13:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.867 13:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.867 13:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.867 13:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.867 13:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.867 13:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.867 "name": "Existed_Raid", 00:14:36.867 "uuid": "8931013c-57ee-4df2-af41-ba1ad33a01d2", 00:14:36.867 "strip_size_kb": 64, 00:14:36.867 "state": "configuring", 00:14:36.867 "raid_level": "concat", 00:14:36.867 "superblock": true, 00:14:36.867 "num_base_bdevs": 3, 00:14:36.867 "num_base_bdevs_discovered": 2, 
00:14:36.867 "num_base_bdevs_operational": 3, 00:14:36.867 "base_bdevs_list": [ 00:14:36.867 { 00:14:36.867 "name": null, 00:14:36.867 "uuid": "d101eaa4-e9de-4310-bb8e-ac85bce5aedc", 00:14:36.867 "is_configured": false, 00:14:36.867 "data_offset": 0, 00:14:36.867 "data_size": 63488 00:14:36.867 }, 00:14:36.867 { 00:14:36.867 "name": "BaseBdev2", 00:14:36.867 "uuid": "f83ed9ed-c960-4de2-aad9-317c5d0a7b3a", 00:14:36.867 "is_configured": true, 00:14:36.867 "data_offset": 2048, 00:14:36.867 "data_size": 63488 00:14:36.867 }, 00:14:36.867 { 00:14:36.867 "name": "BaseBdev3", 00:14:36.867 "uuid": "f1d087bf-b3b3-4784-b015-b4a9c5d2f49d", 00:14:36.867 "is_configured": true, 00:14:36.867 "data_offset": 2048, 00:14:36.867 "data_size": 63488 00:14:36.867 } 00:14:36.867 ] 00:14:36.867 }' 00:14:36.867 13:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.867 13:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.125 13:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.125 13:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.125 13:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.125 13:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:37.125 13:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.384 13:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:37.384 13:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.384 13:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.384 13:34:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:37.384 13:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.384 13:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.384 13:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d101eaa4-e9de-4310-bb8e-ac85bce5aedc 00:14:37.384 13:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.384 13:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.384 [2024-11-20 13:34:36.713923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:37.384 [2024-11-20 13:34:36.714197] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:37.384 [2024-11-20 13:34:36.714218] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:37.384 [2024-11-20 13:34:36.714515] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:37.384 [2024-11-20 13:34:36.714669] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:37.384 [2024-11-20 13:34:36.714685] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:37.384 NewBaseBdev 00:14:37.384 [2024-11-20 13:34:36.714842] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.384 13:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.384 13:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:37.384 13:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:37.384 13:34:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:37.384 13:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:37.384 13:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:37.384 13:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:37.384 13:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:37.384 13:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.384 13:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.384 13:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.384 13:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:37.384 13:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.384 13:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.384 [ 00:14:37.384 { 00:14:37.384 "name": "NewBaseBdev", 00:14:37.384 "aliases": [ 00:14:37.384 "d101eaa4-e9de-4310-bb8e-ac85bce5aedc" 00:14:37.384 ], 00:14:37.384 "product_name": "Malloc disk", 00:14:37.384 "block_size": 512, 00:14:37.384 "num_blocks": 65536, 00:14:37.384 "uuid": "d101eaa4-e9de-4310-bb8e-ac85bce5aedc", 00:14:37.384 "assigned_rate_limits": { 00:14:37.384 "rw_ios_per_sec": 0, 00:14:37.384 "rw_mbytes_per_sec": 0, 00:14:37.384 "r_mbytes_per_sec": 0, 00:14:37.384 "w_mbytes_per_sec": 0 00:14:37.384 }, 00:14:37.384 "claimed": true, 00:14:37.384 "claim_type": "exclusive_write", 00:14:37.384 "zoned": false, 00:14:37.384 "supported_io_types": { 00:14:37.384 "read": true, 00:14:37.384 "write": true, 00:14:37.384 "unmap": true, 
00:14:37.384 "flush": true, 00:14:37.384 "reset": true, 00:14:37.384 "nvme_admin": false, 00:14:37.384 "nvme_io": false, 00:14:37.384 "nvme_io_md": false, 00:14:37.384 "write_zeroes": true, 00:14:37.384 "zcopy": true, 00:14:37.384 "get_zone_info": false, 00:14:37.384 "zone_management": false, 00:14:37.384 "zone_append": false, 00:14:37.384 "compare": false, 00:14:37.384 "compare_and_write": false, 00:14:37.384 "abort": true, 00:14:37.384 "seek_hole": false, 00:14:37.384 "seek_data": false, 00:14:37.384 "copy": true, 00:14:37.384 "nvme_iov_md": false 00:14:37.384 }, 00:14:37.384 "memory_domains": [ 00:14:37.384 { 00:14:37.384 "dma_device_id": "system", 00:14:37.384 "dma_device_type": 1 00:14:37.384 }, 00:14:37.384 { 00:14:37.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.384 "dma_device_type": 2 00:14:37.384 } 00:14:37.384 ], 00:14:37.384 "driver_specific": {} 00:14:37.384 } 00:14:37.384 ] 00:14:37.384 13:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.384 13:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:37.384 13:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:14:37.384 13:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:37.384 13:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.384 13:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:37.384 13:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.384 13:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:37.384 13:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.384 13:34:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.384 13:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.384 13:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.384 13:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.384 13:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.384 13:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.384 13:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.384 13:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.384 13:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.384 "name": "Existed_Raid", 00:14:37.384 "uuid": "8931013c-57ee-4df2-af41-ba1ad33a01d2", 00:14:37.384 "strip_size_kb": 64, 00:14:37.384 "state": "online", 00:14:37.384 "raid_level": "concat", 00:14:37.384 "superblock": true, 00:14:37.384 "num_base_bdevs": 3, 00:14:37.384 "num_base_bdevs_discovered": 3, 00:14:37.384 "num_base_bdevs_operational": 3, 00:14:37.384 "base_bdevs_list": [ 00:14:37.384 { 00:14:37.384 "name": "NewBaseBdev", 00:14:37.384 "uuid": "d101eaa4-e9de-4310-bb8e-ac85bce5aedc", 00:14:37.385 "is_configured": true, 00:14:37.385 "data_offset": 2048, 00:14:37.385 "data_size": 63488 00:14:37.385 }, 00:14:37.385 { 00:14:37.385 "name": "BaseBdev2", 00:14:37.385 "uuid": "f83ed9ed-c960-4de2-aad9-317c5d0a7b3a", 00:14:37.385 "is_configured": true, 00:14:37.385 "data_offset": 2048, 00:14:37.385 "data_size": 63488 00:14:37.385 }, 00:14:37.385 { 00:14:37.385 "name": "BaseBdev3", 00:14:37.385 "uuid": "f1d087bf-b3b3-4784-b015-b4a9c5d2f49d", 00:14:37.385 "is_configured": 
true, 00:14:37.385 "data_offset": 2048, 00:14:37.385 "data_size": 63488 00:14:37.385 } 00:14:37.385 ] 00:14:37.385 }' 00:14:37.385 13:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.385 13:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.953 13:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:37.953 13:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:37.953 13:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:37.953 13:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:37.953 13:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:37.953 13:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:37.953 13:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:37.953 13:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:37.953 13:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.953 13:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.953 [2024-11-20 13:34:37.249534] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:37.953 13:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.953 13:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:37.953 "name": "Existed_Raid", 00:14:37.953 "aliases": [ 00:14:37.953 "8931013c-57ee-4df2-af41-ba1ad33a01d2" 00:14:37.953 ], 00:14:37.953 "product_name": "Raid Volume", 
00:14:37.953 "block_size": 512, 00:14:37.953 "num_blocks": 190464, 00:14:37.953 "uuid": "8931013c-57ee-4df2-af41-ba1ad33a01d2", 00:14:37.953 "assigned_rate_limits": { 00:14:37.953 "rw_ios_per_sec": 0, 00:14:37.953 "rw_mbytes_per_sec": 0, 00:14:37.953 "r_mbytes_per_sec": 0, 00:14:37.953 "w_mbytes_per_sec": 0 00:14:37.953 }, 00:14:37.953 "claimed": false, 00:14:37.953 "zoned": false, 00:14:37.953 "supported_io_types": { 00:14:37.953 "read": true, 00:14:37.953 "write": true, 00:14:37.953 "unmap": true, 00:14:37.953 "flush": true, 00:14:37.953 "reset": true, 00:14:37.953 "nvme_admin": false, 00:14:37.953 "nvme_io": false, 00:14:37.953 "nvme_io_md": false, 00:14:37.953 "write_zeroes": true, 00:14:37.953 "zcopy": false, 00:14:37.953 "get_zone_info": false, 00:14:37.953 "zone_management": false, 00:14:37.953 "zone_append": false, 00:14:37.953 "compare": false, 00:14:37.953 "compare_and_write": false, 00:14:37.953 "abort": false, 00:14:37.953 "seek_hole": false, 00:14:37.953 "seek_data": false, 00:14:37.953 "copy": false, 00:14:37.953 "nvme_iov_md": false 00:14:37.953 }, 00:14:37.953 "memory_domains": [ 00:14:37.953 { 00:14:37.953 "dma_device_id": "system", 00:14:37.953 "dma_device_type": 1 00:14:37.953 }, 00:14:37.953 { 00:14:37.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.953 "dma_device_type": 2 00:14:37.953 }, 00:14:37.953 { 00:14:37.953 "dma_device_id": "system", 00:14:37.953 "dma_device_type": 1 00:14:37.953 }, 00:14:37.953 { 00:14:37.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.953 "dma_device_type": 2 00:14:37.953 }, 00:14:37.953 { 00:14:37.953 "dma_device_id": "system", 00:14:37.953 "dma_device_type": 1 00:14:37.953 }, 00:14:37.953 { 00:14:37.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.953 "dma_device_type": 2 00:14:37.953 } 00:14:37.953 ], 00:14:37.953 "driver_specific": { 00:14:37.953 "raid": { 00:14:37.953 "uuid": "8931013c-57ee-4df2-af41-ba1ad33a01d2", 00:14:37.953 "strip_size_kb": 64, 00:14:37.953 "state": "online", 00:14:37.953 
"raid_level": "concat", 00:14:37.953 "superblock": true, 00:14:37.953 "num_base_bdevs": 3, 00:14:37.953 "num_base_bdevs_discovered": 3, 00:14:37.953 "num_base_bdevs_operational": 3, 00:14:37.953 "base_bdevs_list": [ 00:14:37.953 { 00:14:37.953 "name": "NewBaseBdev", 00:14:37.953 "uuid": "d101eaa4-e9de-4310-bb8e-ac85bce5aedc", 00:14:37.953 "is_configured": true, 00:14:37.953 "data_offset": 2048, 00:14:37.953 "data_size": 63488 00:14:37.953 }, 00:14:37.953 { 00:14:37.953 "name": "BaseBdev2", 00:14:37.953 "uuid": "f83ed9ed-c960-4de2-aad9-317c5d0a7b3a", 00:14:37.953 "is_configured": true, 00:14:37.953 "data_offset": 2048, 00:14:37.953 "data_size": 63488 00:14:37.953 }, 00:14:37.953 { 00:14:37.953 "name": "BaseBdev3", 00:14:37.953 "uuid": "f1d087bf-b3b3-4784-b015-b4a9c5d2f49d", 00:14:37.953 "is_configured": true, 00:14:37.953 "data_offset": 2048, 00:14:37.953 "data_size": 63488 00:14:37.953 } 00:14:37.953 ] 00:14:37.953 } 00:14:37.953 } 00:14:37.953 }' 00:14:37.953 13:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:37.953 13:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:37.953 BaseBdev2 00:14:37.953 BaseBdev3' 00:14:37.953 13:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.953 13:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:37.953 13:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:37.953 13:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:37.953 13:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.953 13:34:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.953 13:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.953 13:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.953 13:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:37.953 13:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:37.953 13:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:37.954 13:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:38.213 13:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.213 13:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.213 13:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:38.213 13:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.213 13:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:38.213 13:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:38.213 13:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:38.213 13:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:38.213 13:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:38.213 13:34:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.213 13:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.213 13:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.213 13:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:38.213 13:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:38.214 13:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:38.214 13:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.214 13:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.214 [2024-11-20 13:34:37.516853] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:38.214 [2024-11-20 13:34:37.516887] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:38.214 [2024-11-20 13:34:37.516974] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:38.214 [2024-11-20 13:34:37.517033] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:38.214 [2024-11-20 13:34:37.517049] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:38.214 13:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.214 13:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 65991 00:14:38.214 13:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 65991 ']' 00:14:38.214 13:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 65991 00:14:38.214 13:34:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:38.214 13:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:38.214 13:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65991 00:14:38.214 killing process with pid 65991 00:14:38.214 13:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:38.214 13:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:38.214 13:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65991' 00:14:38.214 13:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 65991 00:14:38.214 [2024-11-20 13:34:37.565612] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:38.214 13:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 65991 00:14:38.473 [2024-11-20 13:34:37.882479] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:39.850 ************************************ 00:14:39.850 END TEST raid_state_function_test_sb 00:14:39.850 ************************************ 00:14:39.850 13:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:39.850 00:14:39.850 real 0m10.690s 00:14:39.850 user 0m16.974s 00:14:39.850 sys 0m2.083s 00:14:39.850 13:34:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:39.850 13:34:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.850 13:34:39 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:14:39.850 13:34:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:39.850 13:34:39 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:14:39.850 13:34:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:39.850 ************************************ 00:14:39.850 START TEST raid_superblock_test 00:14:39.850 ************************************ 00:14:39.850 13:34:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:14:39.850 13:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:14:39.850 13:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:14:39.850 13:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:39.850 13:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:39.850 13:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:39.850 13:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:39.850 13:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:39.850 13:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:39.850 13:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:39.851 13:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:39.851 13:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:39.851 13:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:39.851 13:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:39.851 13:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:14:39.851 13:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:39.851 13:34:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:39.851 13:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66611 00:14:39.851 13:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66611 00:14:39.851 13:34:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66611 ']' 00:14:39.851 13:34:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.851 13:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:39.851 13:34:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:39.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.851 13:34:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.851 13:34:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:39.851 13:34:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.851 [2024-11-20 13:34:39.214679] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:14:39.851 [2024-11-20 13:34:39.214798] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66611 ] 00:14:40.114 [2024-11-20 13:34:39.384831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.114 [2024-11-20 13:34:39.508123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.373 [2024-11-20 13:34:39.726963] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:40.373 [2024-11-20 13:34:39.727008] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:40.632 13:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:40.632 13:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:40.632 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:40.632 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:40.632 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:40.632 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:40.632 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:40.632 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:40.632 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:40.632 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:40.632 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:40.632 
13:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.632 13:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.632 malloc1 00:14:40.632 13:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.633 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:40.633 13:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.633 13:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.892 [2024-11-20 13:34:40.119093] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:40.892 [2024-11-20 13:34:40.119153] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.892 [2024-11-20 13:34:40.119180] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:40.892 [2024-11-20 13:34:40.119193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.892 [2024-11-20 13:34:40.121752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.892 [2024-11-20 13:34:40.121789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:40.892 pt1 00:14:40.892 13:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.892 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:40.892 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:40.892 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:40.892 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:40.892 13:34:40 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:40.892 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:40.892 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:40.892 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:40.892 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:40.892 13:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.892 13:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.892 malloc2 00:14:40.892 13:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.892 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:40.892 13:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.892 13:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.892 [2024-11-20 13:34:40.177556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:40.892 [2024-11-20 13:34:40.177613] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.892 [2024-11-20 13:34:40.177645] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:40.892 [2024-11-20 13:34:40.177657] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.892 [2024-11-20 13:34:40.180165] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.892 [2024-11-20 13:34:40.180202] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:40.892 
pt2 00:14:40.892 13:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.892 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:40.892 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:40.892 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:40.892 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:40.892 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:40.892 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:40.892 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:40.892 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:40.892 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:40.892 13:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.892 13:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.892 malloc3 00:14:40.892 13:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.892 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:40.892 13:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.892 13:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.892 [2024-11-20 13:34:40.246315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:40.892 [2024-11-20 13:34:40.246366] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.892 [2024-11-20 13:34:40.246392] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:40.892 [2024-11-20 13:34:40.246405] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.892 [2024-11-20 13:34:40.248866] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.892 [2024-11-20 13:34:40.248900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:40.892 pt3 00:14:40.892 13:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.892 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:40.892 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:40.892 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:14:40.892 13:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.892 13:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.892 [2024-11-20 13:34:40.254363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:40.892 [2024-11-20 13:34:40.256584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:40.892 [2024-11-20 13:34:40.256667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:40.892 [2024-11-20 13:34:40.256835] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:40.892 [2024-11-20 13:34:40.256861] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:40.892 [2024-11-20 13:34:40.257187] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:14:40.893 [2024-11-20 13:34:40.257398] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:40.893 [2024-11-20 13:34:40.257415] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:40.893 [2024-11-20 13:34:40.257582] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:40.893 13:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.893 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:14:40.893 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.893 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:40.893 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:40.893 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.893 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:40.893 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.893 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.893 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.893 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.893 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.893 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.893 13:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.893 13:34:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.893 13:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.893 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.893 "name": "raid_bdev1", 00:14:40.893 "uuid": "377ab154-6bed-4b90-a706-ac05a0e0c47a", 00:14:40.893 "strip_size_kb": 64, 00:14:40.893 "state": "online", 00:14:40.893 "raid_level": "concat", 00:14:40.893 "superblock": true, 00:14:40.893 "num_base_bdevs": 3, 00:14:40.893 "num_base_bdevs_discovered": 3, 00:14:40.893 "num_base_bdevs_operational": 3, 00:14:40.893 "base_bdevs_list": [ 00:14:40.893 { 00:14:40.893 "name": "pt1", 00:14:40.893 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:40.893 "is_configured": true, 00:14:40.893 "data_offset": 2048, 00:14:40.893 "data_size": 63488 00:14:40.893 }, 00:14:40.893 { 00:14:40.893 "name": "pt2", 00:14:40.893 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:40.893 "is_configured": true, 00:14:40.893 "data_offset": 2048, 00:14:40.893 "data_size": 63488 00:14:40.893 }, 00:14:40.893 { 00:14:40.893 "name": "pt3", 00:14:40.893 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:40.893 "is_configured": true, 00:14:40.893 "data_offset": 2048, 00:14:40.893 "data_size": 63488 00:14:40.893 } 00:14:40.893 ] 00:14:40.893 }' 00:14:40.893 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.893 13:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.458 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:41.458 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:41.458 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:41.458 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:14:41.458 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:41.458 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:41.458 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:41.458 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:41.458 13:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.458 13:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.458 [2024-11-20 13:34:40.730529] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:41.458 13:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.458 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:41.458 "name": "raid_bdev1", 00:14:41.458 "aliases": [ 00:14:41.458 "377ab154-6bed-4b90-a706-ac05a0e0c47a" 00:14:41.458 ], 00:14:41.458 "product_name": "Raid Volume", 00:14:41.458 "block_size": 512, 00:14:41.458 "num_blocks": 190464, 00:14:41.458 "uuid": "377ab154-6bed-4b90-a706-ac05a0e0c47a", 00:14:41.458 "assigned_rate_limits": { 00:14:41.458 "rw_ios_per_sec": 0, 00:14:41.458 "rw_mbytes_per_sec": 0, 00:14:41.458 "r_mbytes_per_sec": 0, 00:14:41.458 "w_mbytes_per_sec": 0 00:14:41.458 }, 00:14:41.458 "claimed": false, 00:14:41.458 "zoned": false, 00:14:41.458 "supported_io_types": { 00:14:41.458 "read": true, 00:14:41.458 "write": true, 00:14:41.458 "unmap": true, 00:14:41.458 "flush": true, 00:14:41.458 "reset": true, 00:14:41.458 "nvme_admin": false, 00:14:41.458 "nvme_io": false, 00:14:41.458 "nvme_io_md": false, 00:14:41.458 "write_zeroes": true, 00:14:41.458 "zcopy": false, 00:14:41.458 "get_zone_info": false, 00:14:41.458 "zone_management": false, 00:14:41.458 "zone_append": false, 00:14:41.458 "compare": 
false, 00:14:41.458 "compare_and_write": false, 00:14:41.458 "abort": false, 00:14:41.458 "seek_hole": false, 00:14:41.458 "seek_data": false, 00:14:41.458 "copy": false, 00:14:41.458 "nvme_iov_md": false 00:14:41.459 }, 00:14:41.459 "memory_domains": [ 00:14:41.459 { 00:14:41.459 "dma_device_id": "system", 00:14:41.459 "dma_device_type": 1 00:14:41.459 }, 00:14:41.459 { 00:14:41.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.459 "dma_device_type": 2 00:14:41.459 }, 00:14:41.459 { 00:14:41.459 "dma_device_id": "system", 00:14:41.459 "dma_device_type": 1 00:14:41.459 }, 00:14:41.459 { 00:14:41.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.459 "dma_device_type": 2 00:14:41.459 }, 00:14:41.459 { 00:14:41.459 "dma_device_id": "system", 00:14:41.459 "dma_device_type": 1 00:14:41.459 }, 00:14:41.459 { 00:14:41.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.459 "dma_device_type": 2 00:14:41.459 } 00:14:41.459 ], 00:14:41.459 "driver_specific": { 00:14:41.459 "raid": { 00:14:41.459 "uuid": "377ab154-6bed-4b90-a706-ac05a0e0c47a", 00:14:41.459 "strip_size_kb": 64, 00:14:41.459 "state": "online", 00:14:41.459 "raid_level": "concat", 00:14:41.459 "superblock": true, 00:14:41.459 "num_base_bdevs": 3, 00:14:41.459 "num_base_bdevs_discovered": 3, 00:14:41.459 "num_base_bdevs_operational": 3, 00:14:41.459 "base_bdevs_list": [ 00:14:41.459 { 00:14:41.459 "name": "pt1", 00:14:41.459 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:41.459 "is_configured": true, 00:14:41.459 "data_offset": 2048, 00:14:41.459 "data_size": 63488 00:14:41.459 }, 00:14:41.459 { 00:14:41.459 "name": "pt2", 00:14:41.459 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:41.459 "is_configured": true, 00:14:41.459 "data_offset": 2048, 00:14:41.459 "data_size": 63488 00:14:41.459 }, 00:14:41.459 { 00:14:41.459 "name": "pt3", 00:14:41.459 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:41.459 "is_configured": true, 00:14:41.459 "data_offset": 2048, 00:14:41.459 
"data_size": 63488 00:14:41.459 } 00:14:41.459 ] 00:14:41.459 } 00:14:41.459 } 00:14:41.459 }' 00:14:41.459 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:41.459 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:41.459 pt2 00:14:41.459 pt3' 00:14:41.459 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.459 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:41.459 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:41.459 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:41.459 13:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.459 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.459 13:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.459 13:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.459 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:41.459 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:41.459 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:41.459 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:41.459 13:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.459 13:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:14:41.459 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.459 13:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.766 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:41.766 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:41.766 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:41.766 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.766 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:41.766 13:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.766 13:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.766 13:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.766 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:41.766 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:41.766 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:41.766 13:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.766 13:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.766 13:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:41.766 [2024-11-20 13:34:40.994501] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=377ab154-6bed-4b90-a706-ac05a0e0c47a 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 377ab154-6bed-4b90-a706-ac05a0e0c47a ']' 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.766 [2024-11-20 13:34:41.038183] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:41.766 [2024-11-20 13:34:41.038223] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:41.766 [2024-11-20 13:34:41.038310] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:41.766 [2024-11-20 13:34:41.038377] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:41.766 [2024-11-20 13:34:41.038389] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 
00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.766 [2024-11-20 13:34:41.150259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:41.766 [2024-11-20 13:34:41.152643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:41.766 
[2024-11-20 13:34:41.152699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:41.766 [2024-11-20 13:34:41.152752] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:41.766 [2024-11-20 13:34:41.152809] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:41.766 [2024-11-20 13:34:41.152833] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:41.766 [2024-11-20 13:34:41.152856] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:41.766 [2024-11-20 13:34:41.152868] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:41.766 request: 00:14:41.766 { 00:14:41.766 "name": "raid_bdev1", 00:14:41.766 "raid_level": "concat", 00:14:41.766 "base_bdevs": [ 00:14:41.766 "malloc1", 00:14:41.766 "malloc2", 00:14:41.766 "malloc3" 00:14:41.766 ], 00:14:41.766 "strip_size_kb": 64, 00:14:41.766 "superblock": false, 00:14:41.766 "method": "bdev_raid_create", 00:14:41.766 "req_id": 1 00:14:41.766 } 00:14:41.766 Got JSON-RPC error response 00:14:41.766 response: 00:14:41.766 { 00:14:41.766 "code": -17, 00:14:41.766 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:41.766 } 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:41.766 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:41.767 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:41.767 13:34:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.767 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:41.767 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.767 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.767 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.767 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:41.767 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:41.767 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:41.767 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.767 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.767 [2024-11-20 13:34:41.210185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:41.767 [2024-11-20 13:34:41.210359] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.767 [2024-11-20 13:34:41.210421] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:41.767 [2024-11-20 13:34:41.210517] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.767 [2024-11-20 13:34:41.213136] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.767 [2024-11-20 13:34:41.213271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:41.767 [2024-11-20 13:34:41.213433] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:41.767 [2024-11-20 13:34:41.213572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt1 is claimed 00:14:41.767 pt1 00:14:41.767 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.767 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:14:41.767 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.767 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.767 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:41.767 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.767 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:41.767 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.767 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.767 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.767 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.767 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.767 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.767 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.767 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.767 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.025 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.025 "name": "raid_bdev1", 00:14:42.025 "uuid": 
"377ab154-6bed-4b90-a706-ac05a0e0c47a", 00:14:42.025 "strip_size_kb": 64, 00:14:42.025 "state": "configuring", 00:14:42.025 "raid_level": "concat", 00:14:42.025 "superblock": true, 00:14:42.025 "num_base_bdevs": 3, 00:14:42.025 "num_base_bdevs_discovered": 1, 00:14:42.025 "num_base_bdevs_operational": 3, 00:14:42.025 "base_bdevs_list": [ 00:14:42.025 { 00:14:42.025 "name": "pt1", 00:14:42.025 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:42.025 "is_configured": true, 00:14:42.025 "data_offset": 2048, 00:14:42.025 "data_size": 63488 00:14:42.025 }, 00:14:42.025 { 00:14:42.025 "name": null, 00:14:42.025 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:42.025 "is_configured": false, 00:14:42.025 "data_offset": 2048, 00:14:42.025 "data_size": 63488 00:14:42.025 }, 00:14:42.025 { 00:14:42.025 "name": null, 00:14:42.025 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:42.025 "is_configured": false, 00:14:42.025 "data_offset": 2048, 00:14:42.025 "data_size": 63488 00:14:42.025 } 00:14:42.025 ] 00:14:42.025 }' 00:14:42.025 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.025 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.283 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:14:42.283 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:42.283 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.283 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.283 [2024-11-20 13:34:41.617909] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:42.283 [2024-11-20 13:34:41.618109] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.283 [2024-11-20 13:34:41.618149] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:42.283 [2024-11-20 13:34:41.618162] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.283 [2024-11-20 13:34:41.618652] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.283 [2024-11-20 13:34:41.618674] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:42.283 [2024-11-20 13:34:41.618766] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:42.283 [2024-11-20 13:34:41.618795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:42.283 pt2 00:14:42.283 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.283 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:42.283 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.283 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.283 [2024-11-20 13:34:41.629872] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:42.283 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.283 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:14:42.283 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.283 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:42.283 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:42.283 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.283 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:14:42.283 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.283 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.283 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.283 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.283 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.283 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.283 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.283 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.283 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.283 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.283 "name": "raid_bdev1", 00:14:42.283 "uuid": "377ab154-6bed-4b90-a706-ac05a0e0c47a", 00:14:42.283 "strip_size_kb": 64, 00:14:42.283 "state": "configuring", 00:14:42.283 "raid_level": "concat", 00:14:42.283 "superblock": true, 00:14:42.283 "num_base_bdevs": 3, 00:14:42.283 "num_base_bdevs_discovered": 1, 00:14:42.283 "num_base_bdevs_operational": 3, 00:14:42.283 "base_bdevs_list": [ 00:14:42.283 { 00:14:42.283 "name": "pt1", 00:14:42.283 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:42.283 "is_configured": true, 00:14:42.283 "data_offset": 2048, 00:14:42.283 "data_size": 63488 00:14:42.283 }, 00:14:42.283 { 00:14:42.283 "name": null, 00:14:42.283 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:42.283 "is_configured": false, 00:14:42.283 "data_offset": 0, 00:14:42.283 "data_size": 63488 00:14:42.283 }, 00:14:42.283 { 00:14:42.283 "name": null, 00:14:42.283 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:14:42.283 "is_configured": false, 00:14:42.283 "data_offset": 2048, 00:14:42.283 "data_size": 63488 00:14:42.283 } 00:14:42.283 ] 00:14:42.283 }' 00:14:42.283 13:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.283 13:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.849 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:42.849 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:42.849 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:42.849 13:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.849 13:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.849 [2024-11-20 13:34:42.069237] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:42.850 [2024-11-20 13:34:42.069309] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.850 [2024-11-20 13:34:42.069330] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:42.850 [2024-11-20 13:34:42.069344] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.850 [2024-11-20 13:34:42.069801] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.850 [2024-11-20 13:34:42.069825] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:42.850 [2024-11-20 13:34:42.069908] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:42.850 [2024-11-20 13:34:42.069934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:42.850 pt2 00:14:42.850 13:34:42 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.850 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:42.850 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:42.850 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:42.850 13:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.850 13:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.850 [2024-11-20 13:34:42.077212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:42.850 [2024-11-20 13:34:42.077374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.850 [2024-11-20 13:34:42.077399] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:42.850 [2024-11-20 13:34:42.077413] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.850 [2024-11-20 13:34:42.077791] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.850 [2024-11-20 13:34:42.077815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:42.850 [2024-11-20 13:34:42.077879] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:42.850 [2024-11-20 13:34:42.077901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:42.850 [2024-11-20 13:34:42.078011] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:42.850 [2024-11-20 13:34:42.078024] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:42.850 [2024-11-20 13:34:42.078311] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:42.850 [2024-11-20 
13:34:42.078451] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:42.850 [2024-11-20 13:34:42.078461] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:42.850 [2024-11-20 13:34:42.078607] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.850 pt3 00:14:42.850 13:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.850 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:42.850 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:42.850 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:14:42.850 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.850 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.850 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:42.850 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.850 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:42.850 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.850 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.850 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.850 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.850 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.850 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.850 13:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.850 13:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.850 13:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.850 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.850 "name": "raid_bdev1", 00:14:42.850 "uuid": "377ab154-6bed-4b90-a706-ac05a0e0c47a", 00:14:42.850 "strip_size_kb": 64, 00:14:42.850 "state": "online", 00:14:42.850 "raid_level": "concat", 00:14:42.850 "superblock": true, 00:14:42.850 "num_base_bdevs": 3, 00:14:42.850 "num_base_bdevs_discovered": 3, 00:14:42.850 "num_base_bdevs_operational": 3, 00:14:42.850 "base_bdevs_list": [ 00:14:42.850 { 00:14:42.850 "name": "pt1", 00:14:42.850 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:42.850 "is_configured": true, 00:14:42.850 "data_offset": 2048, 00:14:42.850 "data_size": 63488 00:14:42.850 }, 00:14:42.850 { 00:14:42.850 "name": "pt2", 00:14:42.850 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:42.850 "is_configured": true, 00:14:42.850 "data_offset": 2048, 00:14:42.850 "data_size": 63488 00:14:42.850 }, 00:14:42.850 { 00:14:42.850 "name": "pt3", 00:14:42.850 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:42.850 "is_configured": true, 00:14:42.850 "data_offset": 2048, 00:14:42.850 "data_size": 63488 00:14:42.850 } 00:14:42.850 ] 00:14:42.850 }' 00:14:42.850 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.850 13:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.108 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:43.108 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:43.108 13:34:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:43.108 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:43.108 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:43.108 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:43.108 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:43.108 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:43.108 13:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.108 13:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.108 [2024-11-20 13:34:42.568941] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:43.367 13:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.367 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:43.367 "name": "raid_bdev1", 00:14:43.367 "aliases": [ 00:14:43.367 "377ab154-6bed-4b90-a706-ac05a0e0c47a" 00:14:43.367 ], 00:14:43.367 "product_name": "Raid Volume", 00:14:43.367 "block_size": 512, 00:14:43.367 "num_blocks": 190464, 00:14:43.367 "uuid": "377ab154-6bed-4b90-a706-ac05a0e0c47a", 00:14:43.367 "assigned_rate_limits": { 00:14:43.367 "rw_ios_per_sec": 0, 00:14:43.367 "rw_mbytes_per_sec": 0, 00:14:43.367 "r_mbytes_per_sec": 0, 00:14:43.367 "w_mbytes_per_sec": 0 00:14:43.367 }, 00:14:43.367 "claimed": false, 00:14:43.367 "zoned": false, 00:14:43.367 "supported_io_types": { 00:14:43.367 "read": true, 00:14:43.367 "write": true, 00:14:43.367 "unmap": true, 00:14:43.367 "flush": true, 00:14:43.367 "reset": true, 00:14:43.367 "nvme_admin": false, 00:14:43.367 "nvme_io": false, 00:14:43.367 "nvme_io_md": false, 00:14:43.367 
"write_zeroes": true, 00:14:43.367 "zcopy": false, 00:14:43.367 "get_zone_info": false, 00:14:43.367 "zone_management": false, 00:14:43.367 "zone_append": false, 00:14:43.367 "compare": false, 00:14:43.367 "compare_and_write": false, 00:14:43.367 "abort": false, 00:14:43.367 "seek_hole": false, 00:14:43.367 "seek_data": false, 00:14:43.367 "copy": false, 00:14:43.367 "nvme_iov_md": false 00:14:43.367 }, 00:14:43.367 "memory_domains": [ 00:14:43.367 { 00:14:43.367 "dma_device_id": "system", 00:14:43.367 "dma_device_type": 1 00:14:43.367 }, 00:14:43.367 { 00:14:43.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.367 "dma_device_type": 2 00:14:43.367 }, 00:14:43.367 { 00:14:43.367 "dma_device_id": "system", 00:14:43.367 "dma_device_type": 1 00:14:43.367 }, 00:14:43.368 { 00:14:43.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.368 "dma_device_type": 2 00:14:43.368 }, 00:14:43.368 { 00:14:43.368 "dma_device_id": "system", 00:14:43.368 "dma_device_type": 1 00:14:43.368 }, 00:14:43.368 { 00:14:43.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.368 "dma_device_type": 2 00:14:43.368 } 00:14:43.368 ], 00:14:43.368 "driver_specific": { 00:14:43.368 "raid": { 00:14:43.368 "uuid": "377ab154-6bed-4b90-a706-ac05a0e0c47a", 00:14:43.368 "strip_size_kb": 64, 00:14:43.368 "state": "online", 00:14:43.368 "raid_level": "concat", 00:14:43.368 "superblock": true, 00:14:43.368 "num_base_bdevs": 3, 00:14:43.368 "num_base_bdevs_discovered": 3, 00:14:43.368 "num_base_bdevs_operational": 3, 00:14:43.368 "base_bdevs_list": [ 00:14:43.368 { 00:14:43.368 "name": "pt1", 00:14:43.368 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:43.368 "is_configured": true, 00:14:43.368 "data_offset": 2048, 00:14:43.368 "data_size": 63488 00:14:43.368 }, 00:14:43.368 { 00:14:43.368 "name": "pt2", 00:14:43.368 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:43.368 "is_configured": true, 00:14:43.368 "data_offset": 2048, 00:14:43.368 "data_size": 63488 00:14:43.368 }, 00:14:43.368 
{ 00:14:43.368 "name": "pt3", 00:14:43.368 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:43.368 "is_configured": true, 00:14:43.368 "data_offset": 2048, 00:14:43.368 "data_size": 63488 00:14:43.368 } 00:14:43.368 ] 00:14:43.368 } 00:14:43.368 } 00:14:43.368 }' 00:14:43.368 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:43.368 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:43.368 pt2 00:14:43.368 pt3' 00:14:43.368 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.368 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:43.368 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:43.368 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.368 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:43.368 13:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.368 13:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.368 13:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.368 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:43.368 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:43.368 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:43.368 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:43.368 13:34:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.368 13:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.368 13:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.368 13:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.368 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:43.368 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:43.368 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:43.368 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:43.368 13:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.368 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.368 13:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.368 13:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.368 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:43.368 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:43.368 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:43.368 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:43.368 13:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.368 13:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.368 
[2024-11-20 13:34:42.828507] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:43.627 13:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.628 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 377ab154-6bed-4b90-a706-ac05a0e0c47a '!=' 377ab154-6bed-4b90-a706-ac05a0e0c47a ']' 00:14:43.628 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:14:43.628 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:43.628 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:43.628 13:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66611 00:14:43.628 13:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66611 ']' 00:14:43.628 13:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66611 00:14:43.628 13:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:14:43.628 13:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:43.628 13:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66611 00:14:43.628 killing process with pid 66611 00:14:43.628 13:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:43.628 13:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:43.628 13:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66611' 00:14:43.628 13:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66611 00:14:43.628 [2024-11-20 13:34:42.909482] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:43.628 [2024-11-20 13:34:42.909581] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:43.628 13:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66611 00:14:43.628 [2024-11-20 13:34:42.909644] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:43.628 [2024-11-20 13:34:42.909659] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:43.886 [2024-11-20 13:34:43.219147] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:45.266 13:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:45.266 00:14:45.266 real 0m5.251s 00:14:45.266 user 0m7.552s 00:14:45.266 sys 0m0.978s 00:14:45.266 ************************************ 00:14:45.266 END TEST raid_superblock_test 00:14:45.266 ************************************ 00:14:45.266 13:34:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:45.266 13:34:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.266 13:34:44 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:14:45.266 13:34:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:45.266 13:34:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:45.266 13:34:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:45.266 ************************************ 00:14:45.266 START TEST raid_read_error_test 00:14:45.266 ************************************ 00:14:45.266 13:34:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:14:45.266 13:34:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:14:45.266 13:34:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:14:45.266 13:34:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:14:45.266 13:34:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:45.266 13:34:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:45.266 13:34:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:45.266 13:34:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:45.266 13:34:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:45.266 13:34:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:45.266 13:34:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:45.266 13:34:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:45.266 13:34:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:45.266 13:34:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:45.266 13:34:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:45.266 13:34:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:45.266 13:34:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:45.266 13:34:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:45.266 13:34:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:45.266 13:34:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:45.266 13:34:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:45.266 13:34:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:45.266 13:34:44 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:14:45.266 13:34:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:45.266 13:34:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:45.266 13:34:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:45.266 13:34:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.tM0eGvik6D 00:14:45.266 13:34:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=66870 00:14:45.266 13:34:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:45.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.266 13:34:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 66870 00:14:45.266 13:34:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 66870 ']' 00:14:45.266 13:34:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.266 13:34:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:45.266 13:34:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.266 13:34:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:45.266 13:34:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.266 [2024-11-20 13:34:44.560543] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:14:45.266 [2024-11-20 13:34:44.560689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66870 ] 00:14:45.266 [2024-11-20 13:34:44.737531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.525 [2024-11-20 13:34:44.873962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.784 [2024-11-20 13:34:45.094318] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:45.784 [2024-11-20 13:34:45.094384] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:46.043 13:34:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:46.043 13:34:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:46.043 13:34:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:46.043 13:34:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:46.043 13:34:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.043 13:34:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.043 BaseBdev1_malloc 00:14:46.043 13:34:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.043 13:34:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:46.043 13:34:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.043 13:34:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.043 true 00:14:46.043 13:34:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:46.043 13:34:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:46.043 13:34:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.043 13:34:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.043 [2024-11-20 13:34:45.491840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:46.043 [2024-11-20 13:34:45.492041] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:46.043 [2024-11-20 13:34:45.492089] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:46.043 [2024-11-20 13:34:45.492106] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:46.043 [2024-11-20 13:34:45.494657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:46.043 [2024-11-20 13:34:45.494705] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:46.043 BaseBdev1 00:14:46.043 13:34:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.043 13:34:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:46.043 13:34:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:46.043 13:34:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.043 13:34:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.303 BaseBdev2_malloc 00:14:46.303 13:34:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.303 13:34:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:46.303 13:34:45 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.303 13:34:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.303 true 00:14:46.303 13:34:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.303 13:34:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:46.303 13:34:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.303 13:34:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.303 [2024-11-20 13:34:45.562739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:46.303 [2024-11-20 13:34:45.562938] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:46.303 [2024-11-20 13:34:45.562968] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:46.303 [2024-11-20 13:34:45.562984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:46.303 [2024-11-20 13:34:45.565508] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:46.303 [2024-11-20 13:34:45.565554] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:46.303 BaseBdev2 00:14:46.303 13:34:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.303 13:34:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:46.303 13:34:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:46.303 13:34:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.303 13:34:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.303 BaseBdev3_malloc 00:14:46.303 13:34:45 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.303 13:34:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:46.303 13:34:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.303 13:34:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.303 true 00:14:46.303 13:34:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.303 13:34:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:46.303 13:34:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.303 13:34:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.303 [2024-11-20 13:34:45.652442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:46.303 [2024-11-20 13:34:45.652505] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:46.303 [2024-11-20 13:34:45.652526] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:46.303 [2024-11-20 13:34:45.652541] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:46.303 [2024-11-20 13:34:45.655110] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:46.303 [2024-11-20 13:34:45.655159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:46.303 BaseBdev3 00:14:46.303 13:34:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.303 13:34:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:14:46.303 13:34:45 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.303 13:34:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.303 [2024-11-20 13:34:45.664525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:46.303 [2024-11-20 13:34:45.666756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:46.303 [2024-11-20 13:34:45.666836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:46.303 [2024-11-20 13:34:45.667044] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:46.303 [2024-11-20 13:34:45.667071] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:46.303 [2024-11-20 13:34:45.667372] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:14:46.303 [2024-11-20 13:34:45.667529] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:46.303 [2024-11-20 13:34:45.667546] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:46.303 [2024-11-20 13:34:45.667712] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.303 13:34:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.303 13:34:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:14:46.303 13:34:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.303 13:34:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.303 13:34:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:46.303 13:34:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.303 13:34:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:46.303 13:34:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.303 13:34:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.303 13:34:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.303 13:34:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.303 13:34:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.303 13:34:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.303 13:34:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.303 13:34:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.303 13:34:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.303 13:34:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.303 "name": "raid_bdev1", 00:14:46.303 "uuid": "881e8fb2-108f-4244-aa2b-1d19388d8c65", 00:14:46.303 "strip_size_kb": 64, 00:14:46.303 "state": "online", 00:14:46.303 "raid_level": "concat", 00:14:46.303 "superblock": true, 00:14:46.303 "num_base_bdevs": 3, 00:14:46.303 "num_base_bdevs_discovered": 3, 00:14:46.303 "num_base_bdevs_operational": 3, 00:14:46.303 "base_bdevs_list": [ 00:14:46.303 { 00:14:46.303 "name": "BaseBdev1", 00:14:46.303 "uuid": "899115ec-d7d5-58c2-a50b-a1f296bb6f38", 00:14:46.303 "is_configured": true, 00:14:46.303 "data_offset": 2048, 00:14:46.303 "data_size": 63488 00:14:46.303 }, 00:14:46.303 { 00:14:46.303 "name": "BaseBdev2", 00:14:46.303 "uuid": "6967a391-288b-5b30-adc0-aea2d887ddb4", 00:14:46.303 "is_configured": true, 00:14:46.303 "data_offset": 2048, 00:14:46.303 "data_size": 63488 
00:14:46.303 }, 00:14:46.303 { 00:14:46.303 "name": "BaseBdev3", 00:14:46.303 "uuid": "5fb5d85e-6674-568e-b923-71f5602219f1", 00:14:46.303 "is_configured": true, 00:14:46.303 "data_offset": 2048, 00:14:46.303 "data_size": 63488 00:14:46.303 } 00:14:46.303 ] 00:14:46.303 }' 00:14:46.303 13:34:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.303 13:34:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.871 13:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:46.871 13:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:46.871 [2024-11-20 13:34:46.209694] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:14:47.821 13:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:47.821 13:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.821 13:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.821 13:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.821 13:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:47.821 13:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:14:47.821 13:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:14:47.821 13:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:14:47.821 13:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:47.821 13:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:14:47.821 13:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:47.821 13:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.821 13:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:47.821 13:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.821 13:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.821 13:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.821 13:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.821 13:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.821 13:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.821 13:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.821 13:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.821 13:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.821 13:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.821 "name": "raid_bdev1", 00:14:47.821 "uuid": "881e8fb2-108f-4244-aa2b-1d19388d8c65", 00:14:47.821 "strip_size_kb": 64, 00:14:47.821 "state": "online", 00:14:47.821 "raid_level": "concat", 00:14:47.821 "superblock": true, 00:14:47.821 "num_base_bdevs": 3, 00:14:47.821 "num_base_bdevs_discovered": 3, 00:14:47.821 "num_base_bdevs_operational": 3, 00:14:47.821 "base_bdevs_list": [ 00:14:47.821 { 00:14:47.821 "name": "BaseBdev1", 00:14:47.821 "uuid": "899115ec-d7d5-58c2-a50b-a1f296bb6f38", 00:14:47.821 "is_configured": true, 00:14:47.821 "data_offset": 2048, 00:14:47.821 "data_size": 63488 
00:14:47.821 }, 00:14:47.821 { 00:14:47.821 "name": "BaseBdev2", 00:14:47.821 "uuid": "6967a391-288b-5b30-adc0-aea2d887ddb4", 00:14:47.821 "is_configured": true, 00:14:47.821 "data_offset": 2048, 00:14:47.821 "data_size": 63488 00:14:47.821 }, 00:14:47.821 { 00:14:47.821 "name": "BaseBdev3", 00:14:47.821 "uuid": "5fb5d85e-6674-568e-b923-71f5602219f1", 00:14:47.821 "is_configured": true, 00:14:47.821 "data_offset": 2048, 00:14:47.821 "data_size": 63488 00:14:47.821 } 00:14:47.821 ] 00:14:47.821 }' 00:14:47.821 13:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.821 13:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.081 13:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:48.081 13:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.081 13:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.341 [2024-11-20 13:34:47.572291] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:48.341 [2024-11-20 13:34:47.572478] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:48.341 [2024-11-20 13:34:47.575430] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:48.341 [2024-11-20 13:34:47.575493] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:48.341 [2024-11-20 13:34:47.575534] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:48.341 [2024-11-20 13:34:47.575545] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:48.341 { 00:14:48.341 "results": [ 00:14:48.341 { 00:14:48.341 "job": "raid_bdev1", 00:14:48.341 "core_mask": "0x1", 00:14:48.341 "workload": "randrw", 00:14:48.341 "percentage": 50, 
00:14:48.341 "status": "finished", 00:14:48.341 "queue_depth": 1, 00:14:48.341 "io_size": 131072, 00:14:48.341 "runtime": 1.362727, 00:14:48.341 "iops": 15570.98377004345, 00:14:48.341 "mibps": 1946.3729712554311, 00:14:48.341 "io_failed": 1, 00:14:48.341 "io_timeout": 0, 00:14:48.341 "avg_latency_us": 88.31311523189838, 00:14:48.341 "min_latency_us": 27.142168674698794, 00:14:48.341 "max_latency_us": 1506.8016064257029 00:14:48.341 } 00:14:48.341 ], 00:14:48.341 "core_count": 1 00:14:48.341 } 00:14:48.341 13:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.341 13:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 66870 00:14:48.341 13:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 66870 ']' 00:14:48.341 13:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 66870 00:14:48.341 13:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:14:48.341 13:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:48.341 13:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66870 00:14:48.341 13:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:48.341 13:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:48.341 killing process with pid 66870 00:14:48.341 13:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66870' 00:14:48.341 13:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 66870 00:14:48.341 [2024-11-20 13:34:47.621787] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:48.341 13:34:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 66870 00:14:48.600 [2024-11-20 
13:34:47.867328] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:49.992 13:34:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.tM0eGvik6D 00:14:49.992 13:34:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:49.992 13:34:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:49.992 13:34:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:14:49.992 13:34:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:14:49.992 13:34:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:49.992 13:34:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:49.992 13:34:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:14:49.992 00:14:49.992 real 0m4.716s 00:14:49.992 user 0m5.577s 00:14:49.992 sys 0m0.593s 00:14:49.992 13:34:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:49.992 ************************************ 00:14:49.992 END TEST raid_read_error_test 00:14:49.992 ************************************ 00:14:49.992 13:34:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.992 13:34:49 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:14:49.992 13:34:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:49.992 13:34:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:49.992 13:34:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:49.992 ************************************ 00:14:49.992 START TEST raid_write_error_test 00:14:49.992 ************************************ 00:14:49.992 13:34:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:14:49.992 13:34:49 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:14:49.992 13:34:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:14:49.992 13:34:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:49.992 13:34:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:49.992 13:34:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:49.992 13:34:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:49.992 13:34:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:49.992 13:34:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:49.992 13:34:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:49.992 13:34:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:49.992 13:34:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:49.992 13:34:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:49.993 13:34:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:49.993 13:34:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:49.993 13:34:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:49.993 13:34:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:49.993 13:34:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:49.993 13:34:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:49.993 13:34:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:49.993 13:34:49 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:49.993 13:34:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:49.993 13:34:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:14:49.993 13:34:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:49.993 13:34:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:49.993 13:34:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:49.993 13:34:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.vT1g7DjhVJ 00:14:49.993 13:34:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67011 00:14:49.993 13:34:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67011 00:14:49.993 13:34:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67011 ']' 00:14:49.993 13:34:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:49.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.993 13:34:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.993 13:34:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:49.993 13:34:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:49.993 13:34:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:49.993 13:34:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.993 [2024-11-20 13:34:49.348265] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:14:49.993 [2024-11-20 13:34:49.348754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67011 ] 00:14:50.258 [2024-11-20 13:34:49.525049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.258 [2024-11-20 13:34:49.715961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.526 [2024-11-20 13:34:49.939579] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:50.526 [2024-11-20 13:34:49.939650] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:50.794 13:34:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:50.794 13:34:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:50.794 13:34:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:50.794 13:34:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:50.794 13:34:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.794 13:34:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.067 BaseBdev1_malloc 00:14:51.067 13:34:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.067 13:34:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:14:51.067 13:34:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.067 13:34:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.067 true 00:14:51.067 13:34:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.067 13:34:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:51.067 13:34:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.067 13:34:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.067 [2024-11-20 13:34:50.299464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:51.067 [2024-11-20 13:34:50.299535] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.067 [2024-11-20 13:34:50.299563] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:51.067 [2024-11-20 13:34:50.299581] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.067 [2024-11-20 13:34:50.302179] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.067 [2024-11-20 13:34:50.302231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:51.067 BaseBdev1 00:14:51.067 13:34:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.067 13:34:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:51.067 13:34:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:51.067 13:34:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.067 13:34:50 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:51.067 BaseBdev2_malloc 00:14:51.067 13:34:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.067 13:34:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:51.067 13:34:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.067 13:34:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.067 true 00:14:51.067 13:34:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.067 13:34:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:51.067 13:34:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.067 13:34:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.067 [2024-11-20 13:34:50.368763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:51.067 [2024-11-20 13:34:50.368832] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.068 [2024-11-20 13:34:50.368855] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:51.068 [2024-11-20 13:34:50.368873] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.068 [2024-11-20 13:34:50.371453] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.068 [2024-11-20 13:34:50.371504] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:51.068 BaseBdev2 00:14:51.068 13:34:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.068 13:34:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:51.068 13:34:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:51.068 13:34:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.068 13:34:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.068 BaseBdev3_malloc 00:14:51.068 13:34:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.068 13:34:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:51.068 13:34:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.068 13:34:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.068 true 00:14:51.068 13:34:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.068 13:34:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:51.068 13:34:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.068 13:34:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.068 [2024-11-20 13:34:50.452761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:51.068 [2024-11-20 13:34:50.452829] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.068 [2024-11-20 13:34:50.452853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:51.068 [2024-11-20 13:34:50.452871] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.068 [2024-11-20 13:34:50.455508] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.068 [2024-11-20 13:34:50.455702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:14:51.068 BaseBdev3 00:14:51.068 13:34:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.068 13:34:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:14:51.068 13:34:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.068 13:34:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.068 [2024-11-20 13:34:50.464843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:51.068 [2024-11-20 13:34:50.467119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:51.068 [2024-11-20 13:34:50.467202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:51.068 [2024-11-20 13:34:50.467416] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:51.068 [2024-11-20 13:34:50.467431] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:51.068 [2024-11-20 13:34:50.467724] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:14:51.068 [2024-11-20 13:34:50.467897] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:51.068 [2024-11-20 13:34:50.467916] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:51.068 [2024-11-20 13:34:50.468103] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:51.068 13:34:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.068 13:34:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:14:51.068 13:34:50 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:51.068 13:34:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:51.068 13:34:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:51.068 13:34:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.068 13:34:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:51.068 13:34:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.068 13:34:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.068 13:34:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.068 13:34:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.068 13:34:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.068 13:34:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.068 13:34:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.068 13:34:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.068 13:34:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.068 13:34:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.068 "name": "raid_bdev1", 00:14:51.068 "uuid": "e23df8d9-a08e-4c91-98dd-0ceeab7c3d66", 00:14:51.068 "strip_size_kb": 64, 00:14:51.068 "state": "online", 00:14:51.068 "raid_level": "concat", 00:14:51.068 "superblock": true, 00:14:51.068 "num_base_bdevs": 3, 00:14:51.068 "num_base_bdevs_discovered": 3, 00:14:51.068 "num_base_bdevs_operational": 3, 00:14:51.068 "base_bdevs_list": [ 00:14:51.068 { 00:14:51.068 
"name": "BaseBdev1", 00:14:51.068 "uuid": "2eba8d5f-0350-57d3-bb24-4831e55f8f91", 00:14:51.068 "is_configured": true, 00:14:51.068 "data_offset": 2048, 00:14:51.068 "data_size": 63488 00:14:51.068 }, 00:14:51.068 { 00:14:51.068 "name": "BaseBdev2", 00:14:51.068 "uuid": "8f72e6ef-dc3f-5ce0-9e8a-556070e44c5a", 00:14:51.068 "is_configured": true, 00:14:51.068 "data_offset": 2048, 00:14:51.068 "data_size": 63488 00:14:51.068 }, 00:14:51.068 { 00:14:51.068 "name": "BaseBdev3", 00:14:51.068 "uuid": "012b29bc-4f9a-5aad-a58a-daa2044e0ec9", 00:14:51.068 "is_configured": true, 00:14:51.068 "data_offset": 2048, 00:14:51.068 "data_size": 63488 00:14:51.068 } 00:14:51.068 ] 00:14:51.068 }' 00:14:51.068 13:34:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.068 13:34:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.653 13:34:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:51.653 13:34:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:51.653 [2024-11-20 13:34:51.085239] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:14:52.587 13:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:52.587 13:34:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.587 13:34:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.587 13:34:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.587 13:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:52.587 13:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:14:52.587 13:34:51 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:14:52.587 13:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:14:52.587 13:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:52.587 13:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.587 13:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:52.587 13:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.587 13:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:52.587 13:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.587 13:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.587 13:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.587 13:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.587 13:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.587 13:34:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.587 13:34:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.587 13:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.587 13:34:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.587 13:34:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.587 "name": "raid_bdev1", 00:14:52.587 "uuid": "e23df8d9-a08e-4c91-98dd-0ceeab7c3d66", 00:14:52.587 "strip_size_kb": 64, 00:14:52.587 "state": "online", 
00:14:52.587 "raid_level": "concat", 00:14:52.587 "superblock": true, 00:14:52.587 "num_base_bdevs": 3, 00:14:52.587 "num_base_bdevs_discovered": 3, 00:14:52.587 "num_base_bdevs_operational": 3, 00:14:52.587 "base_bdevs_list": [ 00:14:52.587 { 00:14:52.587 "name": "BaseBdev1", 00:14:52.587 "uuid": "2eba8d5f-0350-57d3-bb24-4831e55f8f91", 00:14:52.587 "is_configured": true, 00:14:52.587 "data_offset": 2048, 00:14:52.587 "data_size": 63488 00:14:52.587 }, 00:14:52.587 { 00:14:52.587 "name": "BaseBdev2", 00:14:52.587 "uuid": "8f72e6ef-dc3f-5ce0-9e8a-556070e44c5a", 00:14:52.587 "is_configured": true, 00:14:52.587 "data_offset": 2048, 00:14:52.587 "data_size": 63488 00:14:52.587 }, 00:14:52.587 { 00:14:52.587 "name": "BaseBdev3", 00:14:52.587 "uuid": "012b29bc-4f9a-5aad-a58a-daa2044e0ec9", 00:14:52.587 "is_configured": true, 00:14:52.587 "data_offset": 2048, 00:14:52.587 "data_size": 63488 00:14:52.587 } 00:14:52.587 ] 00:14:52.587 }' 00:14:52.587 13:34:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.587 13:34:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.153 13:34:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:53.153 13:34:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.153 13:34:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.153 [2024-11-20 13:34:52.383493] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:53.153 [2024-11-20 13:34:52.383668] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:53.153 [2024-11-20 13:34:52.386457] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:53.153 [2024-11-20 13:34:52.386640] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:53.153 [2024-11-20 13:34:52.386733] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:53.153 [2024-11-20 13:34:52.386855] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:53.153 { 00:14:53.153 "results": [ 00:14:53.153 { 00:14:53.153 "job": "raid_bdev1", 00:14:53.153 "core_mask": "0x1", 00:14:53.153 "workload": "randrw", 00:14:53.153 "percentage": 50, 00:14:53.153 "status": "finished", 00:14:53.153 "queue_depth": 1, 00:14:53.153 "io_size": 131072, 00:14:53.153 "runtime": 1.298594, 00:14:53.153 "iops": 15851.759672384133, 00:14:53.153 "mibps": 1981.4699590480166, 00:14:53.153 "io_failed": 1, 00:14:53.153 "io_timeout": 0, 00:14:53.153 "avg_latency_us": 86.66897056798065, 00:14:53.153 "min_latency_us": 27.759036144578314, 00:14:53.153 "max_latency_us": 1500.2216867469879 00:14:53.153 } 00:14:53.153 ], 00:14:53.153 "core_count": 1 00:14:53.153 } 00:14:53.153 13:34:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.153 13:34:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67011 00:14:53.153 13:34:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67011 ']' 00:14:53.153 13:34:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67011 00:14:53.153 13:34:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:14:53.153 13:34:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:53.153 13:34:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67011 00:14:53.153 killing process with pid 67011 00:14:53.153 13:34:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:53.153 13:34:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:53.153 
13:34:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67011' 00:14:53.153 13:34:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67011 00:14:53.153 [2024-11-20 13:34:52.434949] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:53.153 13:34:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67011 00:14:53.411 [2024-11-20 13:34:52.667501] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:54.788 13:34:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.vT1g7DjhVJ 00:14:54.788 13:34:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:54.788 13:34:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:54.788 ************************************ 00:14:54.788 END TEST raid_write_error_test 00:14:54.788 ************************************ 00:14:54.788 13:34:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.77 00:14:54.788 13:34:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:14:54.788 13:34:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:54.788 13:34:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:54.788 13:34:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.77 != \0\.\0\0 ]] 00:14:54.788 00:14:54.788 real 0m4.665s 00:14:54.788 user 0m5.579s 00:14:54.788 sys 0m0.620s 00:14:54.788 13:34:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:54.788 13:34:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.788 13:34:53 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:14:54.788 13:34:53 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:14:54.788 13:34:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:54.788 13:34:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:54.788 13:34:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:54.788 ************************************ 00:14:54.788 START TEST raid_state_function_test 00:14:54.788 ************************************ 00:14:54.788 13:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:14:54.788 13:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:54.788 13:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:54.788 13:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:54.788 13:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:54.788 13:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:54.788 13:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:54.788 13:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:54.788 13:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:54.788 13:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:54.788 13:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:54.788 13:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:54.788 13:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:54.788 13:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:54.788 13:34:53 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:54.788 13:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:54.788 13:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:54.788 13:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:54.788 13:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:54.788 13:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:54.788 13:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:54.788 13:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:54.788 13:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:54.788 13:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:54.788 13:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:54.788 13:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:54.788 13:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67160 00:14:54.788 Process raid pid: 67160 00:14:54.788 13:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67160' 00:14:54.788 13:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67160 00:14:54.788 13:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67160 ']' 00:14:54.788 13:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:54.788 13:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:54.788 13:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:54.788 13:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.788 13:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:54.788 13:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.788 [2024-11-20 13:34:54.067818] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:14:54.788 [2024-11-20 13:34:54.067940] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.788 [2024-11-20 13:34:54.251363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.047 [2024-11-20 13:34:54.369576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.306 [2024-11-20 13:34:54.589328] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:55.306 [2024-11-20 13:34:54.589373] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:55.584 13:34:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:55.584 13:34:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:55.584 13:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:55.584 13:34:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.584 13:34:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.584 [2024-11-20 13:34:54.952461] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:55.584 [2024-11-20 13:34:54.952526] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:55.584 [2024-11-20 13:34:54.952542] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:55.584 [2024-11-20 13:34:54.952559] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:55.584 [2024-11-20 13:34:54.952569] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:55.584 [2024-11-20 13:34:54.952585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:55.584 13:34:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.584 13:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:55.584 13:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.584 13:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.584 13:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:55.584 13:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:55.584 13:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:55.584 13:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.584 13:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.584 
13:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.584 13:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.584 13:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.584 13:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.584 13:34:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.584 13:34:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.584 13:34:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.584 13:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.584 "name": "Existed_Raid", 00:14:55.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.584 "strip_size_kb": 0, 00:14:55.584 "state": "configuring", 00:14:55.584 "raid_level": "raid1", 00:14:55.584 "superblock": false, 00:14:55.584 "num_base_bdevs": 3, 00:14:55.584 "num_base_bdevs_discovered": 0, 00:14:55.584 "num_base_bdevs_operational": 3, 00:14:55.584 "base_bdevs_list": [ 00:14:55.584 { 00:14:55.584 "name": "BaseBdev1", 00:14:55.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.584 "is_configured": false, 00:14:55.584 "data_offset": 0, 00:14:55.584 "data_size": 0 00:14:55.584 }, 00:14:55.584 { 00:14:55.584 "name": "BaseBdev2", 00:14:55.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.584 "is_configured": false, 00:14:55.584 "data_offset": 0, 00:14:55.584 "data_size": 0 00:14:55.584 }, 00:14:55.584 { 00:14:55.584 "name": "BaseBdev3", 00:14:55.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.584 "is_configured": false, 00:14:55.584 "data_offset": 0, 00:14:55.584 "data_size": 0 00:14:55.584 } 00:14:55.584 ] 00:14:55.584 }' 00:14:55.585 13:34:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.585 13:34:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.153 13:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:56.153 13:34:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.153 13:34:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.153 [2024-11-20 13:34:55.407813] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:56.153 [2024-11-20 13:34:55.407856] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:56.153 13:34:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.153 13:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:56.153 13:34:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.153 13:34:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.153 [2024-11-20 13:34:55.419779] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:56.153 [2024-11-20 13:34:55.419977] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:56.153 [2024-11-20 13:34:55.420018] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:56.153 [2024-11-20 13:34:55.420036] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:56.153 [2024-11-20 13:34:55.420046] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:56.153 [2024-11-20 13:34:55.420062] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:56.153 13:34:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.153 13:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:56.153 13:34:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.153 13:34:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.153 [2024-11-20 13:34:55.466755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:56.153 BaseBdev1 00:14:56.153 13:34:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.153 13:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:56.153 13:34:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:56.153 13:34:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:56.153 13:34:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:56.153 13:34:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:56.153 13:34:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:56.153 13:34:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:56.153 13:34:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.153 13:34:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.153 13:34:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.153 13:34:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:56.153 13:34:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.153 13:34:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.153 [ 00:14:56.153 { 00:14:56.153 "name": "BaseBdev1", 00:14:56.153 "aliases": [ 00:14:56.153 "c487065c-2ace-4e79-9499-b69341d2b032" 00:14:56.153 ], 00:14:56.153 "product_name": "Malloc disk", 00:14:56.153 "block_size": 512, 00:14:56.153 "num_blocks": 65536, 00:14:56.153 "uuid": "c487065c-2ace-4e79-9499-b69341d2b032", 00:14:56.153 "assigned_rate_limits": { 00:14:56.153 "rw_ios_per_sec": 0, 00:14:56.153 "rw_mbytes_per_sec": 0, 00:14:56.153 "r_mbytes_per_sec": 0, 00:14:56.154 "w_mbytes_per_sec": 0 00:14:56.154 }, 00:14:56.154 "claimed": true, 00:14:56.154 "claim_type": "exclusive_write", 00:14:56.154 "zoned": false, 00:14:56.154 "supported_io_types": { 00:14:56.154 "read": true, 00:14:56.154 "write": true, 00:14:56.154 "unmap": true, 00:14:56.154 "flush": true, 00:14:56.154 "reset": true, 00:14:56.154 "nvme_admin": false, 00:14:56.154 "nvme_io": false, 00:14:56.154 "nvme_io_md": false, 00:14:56.154 "write_zeroes": true, 00:14:56.154 "zcopy": true, 00:14:56.154 "get_zone_info": false, 00:14:56.154 "zone_management": false, 00:14:56.154 "zone_append": false, 00:14:56.154 "compare": false, 00:14:56.154 "compare_and_write": false, 00:14:56.154 "abort": true, 00:14:56.154 "seek_hole": false, 00:14:56.154 "seek_data": false, 00:14:56.154 "copy": true, 00:14:56.154 "nvme_iov_md": false 00:14:56.154 }, 00:14:56.154 "memory_domains": [ 00:14:56.154 { 00:14:56.154 "dma_device_id": "system", 00:14:56.154 "dma_device_type": 1 00:14:56.154 }, 00:14:56.154 { 00:14:56.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.154 "dma_device_type": 2 00:14:56.154 } 00:14:56.154 ], 00:14:56.154 "driver_specific": {} 00:14:56.154 } 00:14:56.154 ] 00:14:56.154 13:34:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.154 13:34:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:56.154 13:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:56.154 13:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.154 13:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.154 13:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:56.154 13:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:56.154 13:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:56.154 13:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.154 13:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.154 13:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.154 13:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.154 13:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.154 13:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.154 13:34:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.154 13:34:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.154 13:34:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.154 13:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:14:56.154 "name": "Existed_Raid", 00:14:56.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.154 "strip_size_kb": 0, 00:14:56.154 "state": "configuring", 00:14:56.154 "raid_level": "raid1", 00:14:56.154 "superblock": false, 00:14:56.154 "num_base_bdevs": 3, 00:14:56.154 "num_base_bdevs_discovered": 1, 00:14:56.154 "num_base_bdevs_operational": 3, 00:14:56.154 "base_bdevs_list": [ 00:14:56.154 { 00:14:56.154 "name": "BaseBdev1", 00:14:56.154 "uuid": "c487065c-2ace-4e79-9499-b69341d2b032", 00:14:56.154 "is_configured": true, 00:14:56.154 "data_offset": 0, 00:14:56.154 "data_size": 65536 00:14:56.154 }, 00:14:56.154 { 00:14:56.154 "name": "BaseBdev2", 00:14:56.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.154 "is_configured": false, 00:14:56.154 "data_offset": 0, 00:14:56.154 "data_size": 0 00:14:56.154 }, 00:14:56.154 { 00:14:56.154 "name": "BaseBdev3", 00:14:56.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.154 "is_configured": false, 00:14:56.154 "data_offset": 0, 00:14:56.154 "data_size": 0 00:14:56.154 } 00:14:56.154 ] 00:14:56.154 }' 00:14:56.154 13:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.154 13:34:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.738 13:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:56.738 13:34:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.738 13:34:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.738 [2024-11-20 13:34:55.930460] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:56.738 [2024-11-20 13:34:55.930705] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:56.738 13:34:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.738 13:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:56.738 13:34:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.738 13:34:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.738 [2024-11-20 13:34:55.942501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:56.738 [2024-11-20 13:34:55.944732] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:56.738 [2024-11-20 13:34:55.944788] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:56.739 [2024-11-20 13:34:55.944803] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:56.739 [2024-11-20 13:34:55.944818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:56.739 13:34:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.739 13:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:56.739 13:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:56.739 13:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:56.739 13:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.739 13:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.739 13:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:56.739 13:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:14:56.739 13:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:56.739 13:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.739 13:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.739 13:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.739 13:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.739 13:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.739 13:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.739 13:34:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.739 13:34:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.739 13:34:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.739 13:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.739 "name": "Existed_Raid", 00:14:56.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.739 "strip_size_kb": 0, 00:14:56.739 "state": "configuring", 00:14:56.739 "raid_level": "raid1", 00:14:56.739 "superblock": false, 00:14:56.739 "num_base_bdevs": 3, 00:14:56.739 "num_base_bdevs_discovered": 1, 00:14:56.739 "num_base_bdevs_operational": 3, 00:14:56.739 "base_bdevs_list": [ 00:14:56.739 { 00:14:56.739 "name": "BaseBdev1", 00:14:56.739 "uuid": "c487065c-2ace-4e79-9499-b69341d2b032", 00:14:56.739 "is_configured": true, 00:14:56.739 "data_offset": 0, 00:14:56.739 "data_size": 65536 00:14:56.739 }, 00:14:56.739 { 00:14:56.739 "name": "BaseBdev2", 00:14:56.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.739 
"is_configured": false, 00:14:56.739 "data_offset": 0, 00:14:56.739 "data_size": 0 00:14:56.739 }, 00:14:56.739 { 00:14:56.739 "name": "BaseBdev3", 00:14:56.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.739 "is_configured": false, 00:14:56.739 "data_offset": 0, 00:14:56.739 "data_size": 0 00:14:56.739 } 00:14:56.739 ] 00:14:56.739 }' 00:14:56.739 13:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.739 13:34:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.004 13:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:57.004 13:34:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.004 13:34:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.263 [2024-11-20 13:34:56.484916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:57.263 BaseBdev2 00:14:57.263 13:34:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.263 13:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:57.263 13:34:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:57.263 13:34:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:57.263 13:34:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:57.263 13:34:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:57.263 13:34:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:57.263 13:34:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:57.263 13:34:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.263 13:34:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.263 13:34:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.263 13:34:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:57.263 13:34:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.263 13:34:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.263 [ 00:14:57.263 { 00:14:57.263 "name": "BaseBdev2", 00:14:57.263 "aliases": [ 00:14:57.263 "7bd18439-1ba9-4fbe-84bf-78b8250ebcb3" 00:14:57.263 ], 00:14:57.263 "product_name": "Malloc disk", 00:14:57.263 "block_size": 512, 00:14:57.263 "num_blocks": 65536, 00:14:57.263 "uuid": "7bd18439-1ba9-4fbe-84bf-78b8250ebcb3", 00:14:57.263 "assigned_rate_limits": { 00:14:57.263 "rw_ios_per_sec": 0, 00:14:57.263 "rw_mbytes_per_sec": 0, 00:14:57.263 "r_mbytes_per_sec": 0, 00:14:57.263 "w_mbytes_per_sec": 0 00:14:57.263 }, 00:14:57.263 "claimed": true, 00:14:57.263 "claim_type": "exclusive_write", 00:14:57.263 "zoned": false, 00:14:57.263 "supported_io_types": { 00:14:57.263 "read": true, 00:14:57.263 "write": true, 00:14:57.263 "unmap": true, 00:14:57.263 "flush": true, 00:14:57.263 "reset": true, 00:14:57.263 "nvme_admin": false, 00:14:57.263 "nvme_io": false, 00:14:57.263 "nvme_io_md": false, 00:14:57.263 "write_zeroes": true, 00:14:57.263 "zcopy": true, 00:14:57.263 "get_zone_info": false, 00:14:57.263 "zone_management": false, 00:14:57.263 "zone_append": false, 00:14:57.263 "compare": false, 00:14:57.263 "compare_and_write": false, 00:14:57.263 "abort": true, 00:14:57.263 "seek_hole": false, 00:14:57.263 "seek_data": false, 00:14:57.263 "copy": true, 00:14:57.263 "nvme_iov_md": false 00:14:57.263 }, 00:14:57.263 
"memory_domains": [ 00:14:57.263 { 00:14:57.263 "dma_device_id": "system", 00:14:57.263 "dma_device_type": 1 00:14:57.263 }, 00:14:57.263 { 00:14:57.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.263 "dma_device_type": 2 00:14:57.263 } 00:14:57.263 ], 00:14:57.263 "driver_specific": {} 00:14:57.263 } 00:14:57.263 ] 00:14:57.263 13:34:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.263 13:34:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:57.263 13:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:57.263 13:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:57.263 13:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:57.263 13:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.263 13:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.263 13:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:57.263 13:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:57.263 13:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:57.263 13:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.263 13:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.263 13:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.263 13:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.263 13:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:14:57.263 13:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.263 13:34:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.263 13:34:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.263 13:34:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.263 13:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.263 "name": "Existed_Raid", 00:14:57.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.263 "strip_size_kb": 0, 00:14:57.263 "state": "configuring", 00:14:57.263 "raid_level": "raid1", 00:14:57.263 "superblock": false, 00:14:57.263 "num_base_bdevs": 3, 00:14:57.263 "num_base_bdevs_discovered": 2, 00:14:57.263 "num_base_bdevs_operational": 3, 00:14:57.263 "base_bdevs_list": [ 00:14:57.263 { 00:14:57.263 "name": "BaseBdev1", 00:14:57.263 "uuid": "c487065c-2ace-4e79-9499-b69341d2b032", 00:14:57.263 "is_configured": true, 00:14:57.263 "data_offset": 0, 00:14:57.263 "data_size": 65536 00:14:57.263 }, 00:14:57.263 { 00:14:57.263 "name": "BaseBdev2", 00:14:57.263 "uuid": "7bd18439-1ba9-4fbe-84bf-78b8250ebcb3", 00:14:57.263 "is_configured": true, 00:14:57.263 "data_offset": 0, 00:14:57.263 "data_size": 65536 00:14:57.263 }, 00:14:57.263 { 00:14:57.263 "name": "BaseBdev3", 00:14:57.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.263 "is_configured": false, 00:14:57.263 "data_offset": 0, 00:14:57.263 "data_size": 0 00:14:57.263 } 00:14:57.263 ] 00:14:57.263 }' 00:14:57.263 13:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.263 13:34:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.523 13:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:14:57.523 13:34:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.523 13:34:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.783 [2024-11-20 13:34:57.018186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:57.783 [2024-11-20 13:34:57.018241] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:57.783 [2024-11-20 13:34:57.018258] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:57.783 [2024-11-20 13:34:57.018595] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:57.783 [2024-11-20 13:34:57.018797] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:57.783 [2024-11-20 13:34:57.018809] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:57.783 [2024-11-20 13:34:57.019154] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:57.783 BaseBdev3 00:14:57.783 13:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.783 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:57.783 13:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:57.783 13:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:57.783 13:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:57.783 13:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:57.783 13:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:57.783 13:34:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:57.783 13:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.783 13:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.783 13:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.783 13:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:57.783 13:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.783 13:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.783 [ 00:14:57.783 { 00:14:57.783 "name": "BaseBdev3", 00:14:57.783 "aliases": [ 00:14:57.783 "3a8c8db4-7cf2-4208-8fa1-3d5af1c53294" 00:14:57.783 ], 00:14:57.783 "product_name": "Malloc disk", 00:14:57.783 "block_size": 512, 00:14:57.783 "num_blocks": 65536, 00:14:57.783 "uuid": "3a8c8db4-7cf2-4208-8fa1-3d5af1c53294", 00:14:57.783 "assigned_rate_limits": { 00:14:57.783 "rw_ios_per_sec": 0, 00:14:57.783 "rw_mbytes_per_sec": 0, 00:14:57.783 "r_mbytes_per_sec": 0, 00:14:57.783 "w_mbytes_per_sec": 0 00:14:57.783 }, 00:14:57.783 "claimed": true, 00:14:57.783 "claim_type": "exclusive_write", 00:14:57.783 "zoned": false, 00:14:57.783 "supported_io_types": { 00:14:57.783 "read": true, 00:14:57.783 "write": true, 00:14:57.783 "unmap": true, 00:14:57.783 "flush": true, 00:14:57.783 "reset": true, 00:14:57.783 "nvme_admin": false, 00:14:57.783 "nvme_io": false, 00:14:57.783 "nvme_io_md": false, 00:14:57.783 "write_zeroes": true, 00:14:57.783 "zcopy": true, 00:14:57.783 "get_zone_info": false, 00:14:57.783 "zone_management": false, 00:14:57.783 "zone_append": false, 00:14:57.783 "compare": false, 00:14:57.783 "compare_and_write": false, 00:14:57.783 "abort": true, 00:14:57.783 "seek_hole": false, 00:14:57.783 "seek_data": false, 00:14:57.783 
"copy": true, 00:14:57.783 "nvme_iov_md": false 00:14:57.783 }, 00:14:57.783 "memory_domains": [ 00:14:57.783 { 00:14:57.783 "dma_device_id": "system", 00:14:57.783 "dma_device_type": 1 00:14:57.783 }, 00:14:57.783 { 00:14:57.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.783 "dma_device_type": 2 00:14:57.783 } 00:14:57.783 ], 00:14:57.783 "driver_specific": {} 00:14:57.783 } 00:14:57.783 ] 00:14:57.783 13:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.783 13:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:57.783 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:57.783 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:57.783 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:57.783 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.783 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:57.783 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:57.783 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:57.783 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:57.783 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.783 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.783 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.783 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.783 13:34:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.783 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.783 13:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.783 13:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.783 13:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.783 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.783 "name": "Existed_Raid", 00:14:57.783 "uuid": "6d70e5c0-136c-4619-8480-dc0910ab1ecc", 00:14:57.783 "strip_size_kb": 0, 00:14:57.783 "state": "online", 00:14:57.783 "raid_level": "raid1", 00:14:57.783 "superblock": false, 00:14:57.783 "num_base_bdevs": 3, 00:14:57.783 "num_base_bdevs_discovered": 3, 00:14:57.783 "num_base_bdevs_operational": 3, 00:14:57.783 "base_bdevs_list": [ 00:14:57.783 { 00:14:57.783 "name": "BaseBdev1", 00:14:57.783 "uuid": "c487065c-2ace-4e79-9499-b69341d2b032", 00:14:57.783 "is_configured": true, 00:14:57.783 "data_offset": 0, 00:14:57.783 "data_size": 65536 00:14:57.783 }, 00:14:57.783 { 00:14:57.783 "name": "BaseBdev2", 00:14:57.783 "uuid": "7bd18439-1ba9-4fbe-84bf-78b8250ebcb3", 00:14:57.783 "is_configured": true, 00:14:57.783 "data_offset": 0, 00:14:57.783 "data_size": 65536 00:14:57.783 }, 00:14:57.783 { 00:14:57.783 "name": "BaseBdev3", 00:14:57.783 "uuid": "3a8c8db4-7cf2-4208-8fa1-3d5af1c53294", 00:14:57.783 "is_configured": true, 00:14:57.783 "data_offset": 0, 00:14:57.783 "data_size": 65536 00:14:57.783 } 00:14:57.783 ] 00:14:57.783 }' 00:14:57.783 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.783 13:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.351 13:34:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:58.351 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:58.351 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:58.351 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:58.351 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:58.351 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:58.351 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:58.351 13:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.351 13:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.351 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:58.351 [2024-11-20 13:34:57.549919] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:58.351 13:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.351 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:58.351 "name": "Existed_Raid", 00:14:58.351 "aliases": [ 00:14:58.351 "6d70e5c0-136c-4619-8480-dc0910ab1ecc" 00:14:58.351 ], 00:14:58.351 "product_name": "Raid Volume", 00:14:58.351 "block_size": 512, 00:14:58.351 "num_blocks": 65536, 00:14:58.351 "uuid": "6d70e5c0-136c-4619-8480-dc0910ab1ecc", 00:14:58.351 "assigned_rate_limits": { 00:14:58.351 "rw_ios_per_sec": 0, 00:14:58.351 "rw_mbytes_per_sec": 0, 00:14:58.351 "r_mbytes_per_sec": 0, 00:14:58.351 "w_mbytes_per_sec": 0 00:14:58.351 }, 00:14:58.351 "claimed": false, 00:14:58.351 "zoned": false, 
00:14:58.351 "supported_io_types": { 00:14:58.351 "read": true, 00:14:58.351 "write": true, 00:14:58.351 "unmap": false, 00:14:58.351 "flush": false, 00:14:58.351 "reset": true, 00:14:58.351 "nvme_admin": false, 00:14:58.351 "nvme_io": false, 00:14:58.351 "nvme_io_md": false, 00:14:58.351 "write_zeroes": true, 00:14:58.351 "zcopy": false, 00:14:58.351 "get_zone_info": false, 00:14:58.351 "zone_management": false, 00:14:58.351 "zone_append": false, 00:14:58.351 "compare": false, 00:14:58.351 "compare_and_write": false, 00:14:58.351 "abort": false, 00:14:58.351 "seek_hole": false, 00:14:58.351 "seek_data": false, 00:14:58.351 "copy": false, 00:14:58.351 "nvme_iov_md": false 00:14:58.351 }, 00:14:58.351 "memory_domains": [ 00:14:58.351 { 00:14:58.351 "dma_device_id": "system", 00:14:58.351 "dma_device_type": 1 00:14:58.351 }, 00:14:58.351 { 00:14:58.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.351 "dma_device_type": 2 00:14:58.351 }, 00:14:58.351 { 00:14:58.351 "dma_device_id": "system", 00:14:58.351 "dma_device_type": 1 00:14:58.351 }, 00:14:58.351 { 00:14:58.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.351 "dma_device_type": 2 00:14:58.351 }, 00:14:58.351 { 00:14:58.351 "dma_device_id": "system", 00:14:58.351 "dma_device_type": 1 00:14:58.351 }, 00:14:58.351 { 00:14:58.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.351 "dma_device_type": 2 00:14:58.351 } 00:14:58.351 ], 00:14:58.351 "driver_specific": { 00:14:58.351 "raid": { 00:14:58.351 "uuid": "6d70e5c0-136c-4619-8480-dc0910ab1ecc", 00:14:58.351 "strip_size_kb": 0, 00:14:58.351 "state": "online", 00:14:58.351 "raid_level": "raid1", 00:14:58.351 "superblock": false, 00:14:58.351 "num_base_bdevs": 3, 00:14:58.351 "num_base_bdevs_discovered": 3, 00:14:58.351 "num_base_bdevs_operational": 3, 00:14:58.351 "base_bdevs_list": [ 00:14:58.351 { 00:14:58.351 "name": "BaseBdev1", 00:14:58.351 "uuid": "c487065c-2ace-4e79-9499-b69341d2b032", 00:14:58.351 "is_configured": true, 00:14:58.351 
"data_offset": 0, 00:14:58.351 "data_size": 65536 00:14:58.351 }, 00:14:58.351 { 00:14:58.351 "name": "BaseBdev2", 00:14:58.351 "uuid": "7bd18439-1ba9-4fbe-84bf-78b8250ebcb3", 00:14:58.351 "is_configured": true, 00:14:58.351 "data_offset": 0, 00:14:58.351 "data_size": 65536 00:14:58.351 }, 00:14:58.351 { 00:14:58.351 "name": "BaseBdev3", 00:14:58.351 "uuid": "3a8c8db4-7cf2-4208-8fa1-3d5af1c53294", 00:14:58.351 "is_configured": true, 00:14:58.351 "data_offset": 0, 00:14:58.351 "data_size": 65536 00:14:58.351 } 00:14:58.351 ] 00:14:58.351 } 00:14:58.351 } 00:14:58.351 }' 00:14:58.351 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:58.351 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:58.351 BaseBdev2 00:14:58.351 BaseBdev3' 00:14:58.351 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:58.351 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:58.351 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:58.351 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:58.352 13:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.352 13:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.352 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:58.352 13:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.352 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:14:58.352 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:58.352 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:58.352 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:58.352 13:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.352 13:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.352 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:58.352 13:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.352 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:58.352 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:58.352 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:58.352 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:58.352 13:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.352 13:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.352 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:58.352 13:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.352 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:58.352 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:14:58.352 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:58.352 13:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.352 13:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.352 [2024-11-20 13:34:57.821259] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:58.610 13:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.610 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:58.610 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:58.610 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:58.610 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:58.610 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:58.610 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:58.610 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:58.610 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.610 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:58.610 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:58.610 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:58.610 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.610 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:14:58.610 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.610 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.610 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.610 13:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.610 13:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.610 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.610 13:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.610 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.610 "name": "Existed_Raid", 00:14:58.610 "uuid": "6d70e5c0-136c-4619-8480-dc0910ab1ecc", 00:14:58.610 "strip_size_kb": 0, 00:14:58.610 "state": "online", 00:14:58.610 "raid_level": "raid1", 00:14:58.610 "superblock": false, 00:14:58.610 "num_base_bdevs": 3, 00:14:58.610 "num_base_bdevs_discovered": 2, 00:14:58.610 "num_base_bdevs_operational": 2, 00:14:58.610 "base_bdevs_list": [ 00:14:58.610 { 00:14:58.610 "name": null, 00:14:58.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.610 "is_configured": false, 00:14:58.610 "data_offset": 0, 00:14:58.610 "data_size": 65536 00:14:58.610 }, 00:14:58.610 { 00:14:58.610 "name": "BaseBdev2", 00:14:58.610 "uuid": "7bd18439-1ba9-4fbe-84bf-78b8250ebcb3", 00:14:58.610 "is_configured": true, 00:14:58.610 "data_offset": 0, 00:14:58.610 "data_size": 65536 00:14:58.610 }, 00:14:58.610 { 00:14:58.610 "name": "BaseBdev3", 00:14:58.610 "uuid": "3a8c8db4-7cf2-4208-8fa1-3d5af1c53294", 00:14:58.610 "is_configured": true, 00:14:58.610 "data_offset": 0, 00:14:58.610 "data_size": 65536 00:14:58.610 } 00:14:58.610 ] 
00:14:58.610 }' 00:14:58.610 13:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.610 13:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.868 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:58.868 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:58.868 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:58.868 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.868 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.868 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.126 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.126 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:59.126 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:59.126 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:59.126 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.126 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.126 [2024-11-20 13:34:58.401780] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:59.126 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.126 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:59.126 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:59.126 13:34:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.126 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:59.126 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.126 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.126 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.126 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:59.126 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:59.126 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:59.126 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.126 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.126 [2024-11-20 13:34:58.561279] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:59.126 [2024-11-20 13:34:58.561390] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:59.384 [2024-11-20 13:34:58.663368] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:59.384 [2024-11-20 13:34:58.663433] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:59.385 [2024-11-20 13:34:58.663452] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:59.385 13:34:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.385 BaseBdev2 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:59.385 
13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.385 [ 00:14:59.385 { 00:14:59.385 "name": "BaseBdev2", 00:14:59.385 "aliases": [ 00:14:59.385 "840761bc-7ee1-4352-b8e9-7fed0e12548a" 00:14:59.385 ], 00:14:59.385 "product_name": "Malloc disk", 00:14:59.385 "block_size": 512, 00:14:59.385 "num_blocks": 65536, 00:14:59.385 "uuid": "840761bc-7ee1-4352-b8e9-7fed0e12548a", 00:14:59.385 "assigned_rate_limits": { 00:14:59.385 "rw_ios_per_sec": 0, 00:14:59.385 "rw_mbytes_per_sec": 0, 00:14:59.385 "r_mbytes_per_sec": 0, 00:14:59.385 "w_mbytes_per_sec": 0 00:14:59.385 }, 00:14:59.385 "claimed": false, 00:14:59.385 "zoned": false, 00:14:59.385 "supported_io_types": { 00:14:59.385 "read": true, 00:14:59.385 "write": true, 00:14:59.385 "unmap": true, 00:14:59.385 "flush": true, 00:14:59.385 "reset": true, 00:14:59.385 "nvme_admin": false, 00:14:59.385 "nvme_io": false, 00:14:59.385 "nvme_io_md": false, 00:14:59.385 "write_zeroes": true, 
00:14:59.385 "zcopy": true, 00:14:59.385 "get_zone_info": false, 00:14:59.385 "zone_management": false, 00:14:59.385 "zone_append": false, 00:14:59.385 "compare": false, 00:14:59.385 "compare_and_write": false, 00:14:59.385 "abort": true, 00:14:59.385 "seek_hole": false, 00:14:59.385 "seek_data": false, 00:14:59.385 "copy": true, 00:14:59.385 "nvme_iov_md": false 00:14:59.385 }, 00:14:59.385 "memory_domains": [ 00:14:59.385 { 00:14:59.385 "dma_device_id": "system", 00:14:59.385 "dma_device_type": 1 00:14:59.385 }, 00:14:59.385 { 00:14:59.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.385 "dma_device_type": 2 00:14:59.385 } 00:14:59.385 ], 00:14:59.385 "driver_specific": {} 00:14:59.385 } 00:14:59.385 ] 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.385 BaseBdev3 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:59.385 13:34:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.385 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.643 [ 00:14:59.643 { 00:14:59.643 "name": "BaseBdev3", 00:14:59.643 "aliases": [ 00:14:59.643 "68561f6e-1142-4cc5-a987-723157c739b8" 00:14:59.643 ], 00:14:59.643 "product_name": "Malloc disk", 00:14:59.643 "block_size": 512, 00:14:59.643 "num_blocks": 65536, 00:14:59.643 "uuid": "68561f6e-1142-4cc5-a987-723157c739b8", 00:14:59.643 "assigned_rate_limits": { 00:14:59.643 "rw_ios_per_sec": 0, 00:14:59.643 "rw_mbytes_per_sec": 0, 00:14:59.643 "r_mbytes_per_sec": 0, 00:14:59.643 "w_mbytes_per_sec": 0 00:14:59.643 }, 00:14:59.643 "claimed": false, 00:14:59.643 "zoned": false, 00:14:59.643 "supported_io_types": { 00:14:59.643 "read": true, 00:14:59.643 "write": true, 00:14:59.643 "unmap": true, 00:14:59.643 "flush": true, 00:14:59.643 "reset": true, 00:14:59.643 "nvme_admin": false, 00:14:59.643 "nvme_io": false, 00:14:59.643 "nvme_io_md": false, 00:14:59.643 "write_zeroes": true, 
00:14:59.643 "zcopy": true, 00:14:59.643 "get_zone_info": false, 00:14:59.643 "zone_management": false, 00:14:59.643 "zone_append": false, 00:14:59.643 "compare": false, 00:14:59.643 "compare_and_write": false, 00:14:59.643 "abort": true, 00:14:59.643 "seek_hole": false, 00:14:59.643 "seek_data": false, 00:14:59.643 "copy": true, 00:14:59.643 "nvme_iov_md": false 00:14:59.643 }, 00:14:59.643 "memory_domains": [ 00:14:59.643 { 00:14:59.643 "dma_device_id": "system", 00:14:59.643 "dma_device_type": 1 00:14:59.643 }, 00:14:59.643 { 00:14:59.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.643 "dma_device_type": 2 00:14:59.643 } 00:14:59.643 ], 00:14:59.643 "driver_specific": {} 00:14:59.643 } 00:14:59.643 ] 00:14:59.643 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.643 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:59.643 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:59.643 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:59.643 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:59.643 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.643 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.643 [2024-11-20 13:34:58.898212] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:59.644 [2024-11-20 13:34:58.898270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:59.644 [2024-11-20 13:34:58.898306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:59.644 [2024-11-20 13:34:58.900706] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:59.644 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.644 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:59.644 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:59.644 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:59.644 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:59.644 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:59.644 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:59.644 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.644 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.644 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.644 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.644 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.644 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.644 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.644 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.644 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.644 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:14:59.644 "name": "Existed_Raid", 00:14:59.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.644 "strip_size_kb": 0, 00:14:59.644 "state": "configuring", 00:14:59.644 "raid_level": "raid1", 00:14:59.644 "superblock": false, 00:14:59.644 "num_base_bdevs": 3, 00:14:59.644 "num_base_bdevs_discovered": 2, 00:14:59.644 "num_base_bdevs_operational": 3, 00:14:59.644 "base_bdevs_list": [ 00:14:59.644 { 00:14:59.644 "name": "BaseBdev1", 00:14:59.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.644 "is_configured": false, 00:14:59.644 "data_offset": 0, 00:14:59.644 "data_size": 0 00:14:59.644 }, 00:14:59.644 { 00:14:59.644 "name": "BaseBdev2", 00:14:59.644 "uuid": "840761bc-7ee1-4352-b8e9-7fed0e12548a", 00:14:59.644 "is_configured": true, 00:14:59.644 "data_offset": 0, 00:14:59.644 "data_size": 65536 00:14:59.644 }, 00:14:59.644 { 00:14:59.644 "name": "BaseBdev3", 00:14:59.644 "uuid": "68561f6e-1142-4cc5-a987-723157c739b8", 00:14:59.644 "is_configured": true, 00:14:59.644 "data_offset": 0, 00:14:59.644 "data_size": 65536 00:14:59.644 } 00:14:59.644 ] 00:14:59.644 }' 00:14:59.644 13:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.644 13:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.901 13:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:59.901 13:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.901 13:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.901 [2024-11-20 13:34:59.365618] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:59.901 13:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.901 13:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:14:59.901 13:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:59.901 13:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:59.901 13:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:59.901 13:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:59.901 13:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:59.901 13:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.901 13:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.901 13:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.901 13:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.901 13:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.901 13:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.901 13:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.901 13:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.189 13:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.189 13:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.189 "name": "Existed_Raid", 00:15:00.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.189 "strip_size_kb": 0, 00:15:00.189 "state": "configuring", 00:15:00.189 "raid_level": "raid1", 00:15:00.189 "superblock": false, 00:15:00.189 "num_base_bdevs": 3, 
00:15:00.189 "num_base_bdevs_discovered": 1, 00:15:00.189 "num_base_bdevs_operational": 3, 00:15:00.189 "base_bdevs_list": [ 00:15:00.189 { 00:15:00.189 "name": "BaseBdev1", 00:15:00.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.189 "is_configured": false, 00:15:00.189 "data_offset": 0, 00:15:00.189 "data_size": 0 00:15:00.189 }, 00:15:00.189 { 00:15:00.189 "name": null, 00:15:00.189 "uuid": "840761bc-7ee1-4352-b8e9-7fed0e12548a", 00:15:00.189 "is_configured": false, 00:15:00.189 "data_offset": 0, 00:15:00.189 "data_size": 65536 00:15:00.189 }, 00:15:00.189 { 00:15:00.189 "name": "BaseBdev3", 00:15:00.189 "uuid": "68561f6e-1142-4cc5-a987-723157c739b8", 00:15:00.189 "is_configured": true, 00:15:00.189 "data_offset": 0, 00:15:00.189 "data_size": 65536 00:15:00.189 } 00:15:00.189 ] 00:15:00.189 }' 00:15:00.189 13:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.189 13:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.448 13:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:00.448 13:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.448 13:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.448 13:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.448 13:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.448 13:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:00.448 13:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:00.448 13:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.448 13:34:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.448 [2024-11-20 13:34:59.891790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:00.448 BaseBdev1 00:15:00.448 13:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.448 13:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:00.448 13:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:00.448 13:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:00.448 13:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:00.448 13:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:00.448 13:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:00.448 13:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:00.448 13:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.448 13:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.448 13:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.448 13:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:00.448 13:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.448 13:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.448 [ 00:15:00.448 { 00:15:00.448 "name": "BaseBdev1", 00:15:00.448 "aliases": [ 00:15:00.448 "13446849-fed9-46d7-9b30-803a1db8d452" 00:15:00.448 ], 00:15:00.448 "product_name": "Malloc disk", 
00:15:00.448 "block_size": 512, 00:15:00.448 "num_blocks": 65536, 00:15:00.448 "uuid": "13446849-fed9-46d7-9b30-803a1db8d452", 00:15:00.448 "assigned_rate_limits": { 00:15:00.448 "rw_ios_per_sec": 0, 00:15:00.448 "rw_mbytes_per_sec": 0, 00:15:00.448 "r_mbytes_per_sec": 0, 00:15:00.448 "w_mbytes_per_sec": 0 00:15:00.448 }, 00:15:00.448 "claimed": true, 00:15:00.448 "claim_type": "exclusive_write", 00:15:00.449 "zoned": false, 00:15:00.449 "supported_io_types": { 00:15:00.449 "read": true, 00:15:00.449 "write": true, 00:15:00.449 "unmap": true, 00:15:00.449 "flush": true, 00:15:00.449 "reset": true, 00:15:00.449 "nvme_admin": false, 00:15:00.449 "nvme_io": false, 00:15:00.449 "nvme_io_md": false, 00:15:00.449 "write_zeroes": true, 00:15:00.449 "zcopy": true, 00:15:00.449 "get_zone_info": false, 00:15:00.449 "zone_management": false, 00:15:00.707 "zone_append": false, 00:15:00.707 "compare": false, 00:15:00.707 "compare_and_write": false, 00:15:00.707 "abort": true, 00:15:00.707 "seek_hole": false, 00:15:00.707 "seek_data": false, 00:15:00.707 "copy": true, 00:15:00.707 "nvme_iov_md": false 00:15:00.707 }, 00:15:00.707 "memory_domains": [ 00:15:00.707 { 00:15:00.707 "dma_device_id": "system", 00:15:00.707 "dma_device_type": 1 00:15:00.707 }, 00:15:00.707 { 00:15:00.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.707 "dma_device_type": 2 00:15:00.707 } 00:15:00.707 ], 00:15:00.707 "driver_specific": {} 00:15:00.707 } 00:15:00.707 ] 00:15:00.707 13:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.707 13:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:00.707 13:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:00.707 13:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.707 13:34:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.707 13:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:00.707 13:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:00.707 13:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.707 13:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.707 13:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.707 13:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.707 13:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.707 13:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.707 13:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.707 13:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.707 13:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.707 13:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.707 13:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.707 "name": "Existed_Raid", 00:15:00.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.707 "strip_size_kb": 0, 00:15:00.707 "state": "configuring", 00:15:00.707 "raid_level": "raid1", 00:15:00.707 "superblock": false, 00:15:00.707 "num_base_bdevs": 3, 00:15:00.707 "num_base_bdevs_discovered": 2, 00:15:00.707 "num_base_bdevs_operational": 3, 00:15:00.707 "base_bdevs_list": [ 00:15:00.707 { 00:15:00.707 "name": "BaseBdev1", 00:15:00.707 "uuid": 
"13446849-fed9-46d7-9b30-803a1db8d452", 00:15:00.707 "is_configured": true, 00:15:00.707 "data_offset": 0, 00:15:00.707 "data_size": 65536 00:15:00.707 }, 00:15:00.707 { 00:15:00.707 "name": null, 00:15:00.707 "uuid": "840761bc-7ee1-4352-b8e9-7fed0e12548a", 00:15:00.707 "is_configured": false, 00:15:00.707 "data_offset": 0, 00:15:00.707 "data_size": 65536 00:15:00.707 }, 00:15:00.707 { 00:15:00.707 "name": "BaseBdev3", 00:15:00.707 "uuid": "68561f6e-1142-4cc5-a987-723157c739b8", 00:15:00.707 "is_configured": true, 00:15:00.707 "data_offset": 0, 00:15:00.707 "data_size": 65536 00:15:00.707 } 00:15:00.707 ] 00:15:00.707 }' 00:15:00.707 13:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.707 13:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.966 13:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.966 13:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:00.966 13:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.966 13:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.966 13:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.966 13:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:00.966 13:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:00.966 13:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.966 13:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.966 [2024-11-20 13:35:00.423110] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:00.966 13:35:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.966 13:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:00.966 13:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.966 13:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.966 13:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:00.966 13:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:00.966 13:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.966 13:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.966 13:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.966 13:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.966 13:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.966 13:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.966 13:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.966 13:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.966 13:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.223 13:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.223 13:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.223 "name": "Existed_Raid", 00:15:01.223 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:01.223 "strip_size_kb": 0, 00:15:01.223 "state": "configuring", 00:15:01.223 "raid_level": "raid1", 00:15:01.223 "superblock": false, 00:15:01.223 "num_base_bdevs": 3, 00:15:01.224 "num_base_bdevs_discovered": 1, 00:15:01.224 "num_base_bdevs_operational": 3, 00:15:01.224 "base_bdevs_list": [ 00:15:01.224 { 00:15:01.224 "name": "BaseBdev1", 00:15:01.224 "uuid": "13446849-fed9-46d7-9b30-803a1db8d452", 00:15:01.224 "is_configured": true, 00:15:01.224 "data_offset": 0, 00:15:01.224 "data_size": 65536 00:15:01.224 }, 00:15:01.224 { 00:15:01.224 "name": null, 00:15:01.224 "uuid": "840761bc-7ee1-4352-b8e9-7fed0e12548a", 00:15:01.224 "is_configured": false, 00:15:01.224 "data_offset": 0, 00:15:01.224 "data_size": 65536 00:15:01.224 }, 00:15:01.224 { 00:15:01.224 "name": null, 00:15:01.224 "uuid": "68561f6e-1142-4cc5-a987-723157c739b8", 00:15:01.224 "is_configured": false, 00:15:01.224 "data_offset": 0, 00:15:01.224 "data_size": 65536 00:15:01.224 } 00:15:01.224 ] 00:15:01.224 }' 00:15:01.224 13:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.224 13:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.483 13:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.483 13:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.483 13:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.483 13:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:01.483 13:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.483 13:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:01.483 13:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:01.483 13:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.483 13:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.483 [2024-11-20 13:35:00.878691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:01.483 13:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.483 13:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:01.483 13:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.483 13:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:01.483 13:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:01.483 13:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:01.483 13:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:01.483 13:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.483 13:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.483 13:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.483 13:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.483 13:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.483 13:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.483 13:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:15:01.483 13:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.483 13:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.483 13:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.483 "name": "Existed_Raid", 00:15:01.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.483 "strip_size_kb": 0, 00:15:01.483 "state": "configuring", 00:15:01.483 "raid_level": "raid1", 00:15:01.483 "superblock": false, 00:15:01.483 "num_base_bdevs": 3, 00:15:01.483 "num_base_bdevs_discovered": 2, 00:15:01.483 "num_base_bdevs_operational": 3, 00:15:01.483 "base_bdevs_list": [ 00:15:01.483 { 00:15:01.483 "name": "BaseBdev1", 00:15:01.483 "uuid": "13446849-fed9-46d7-9b30-803a1db8d452", 00:15:01.483 "is_configured": true, 00:15:01.483 "data_offset": 0, 00:15:01.483 "data_size": 65536 00:15:01.483 }, 00:15:01.483 { 00:15:01.483 "name": null, 00:15:01.483 "uuid": "840761bc-7ee1-4352-b8e9-7fed0e12548a", 00:15:01.483 "is_configured": false, 00:15:01.483 "data_offset": 0, 00:15:01.483 "data_size": 65536 00:15:01.483 }, 00:15:01.483 { 00:15:01.483 "name": "BaseBdev3", 00:15:01.483 "uuid": "68561f6e-1142-4cc5-a987-723157c739b8", 00:15:01.483 "is_configured": true, 00:15:01.483 "data_offset": 0, 00:15:01.483 "data_size": 65536 00:15:01.483 } 00:15:01.483 ] 00:15:01.483 }' 00:15:01.483 13:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.483 13:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.051 13:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.051 13:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.051 13:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:15:02.051 13:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.051 13:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.051 13:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:02.051 13:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:02.051 13:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.051 13:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.051 [2024-11-20 13:35:01.386285] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:02.051 13:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.051 13:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:02.051 13:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:02.051 13:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:02.051 13:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:02.051 13:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:02.051 13:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:02.051 13:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.051 13:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.051 13:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.051 13:35:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.051 13:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.051 13:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.051 13:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.051 13:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.051 13:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.051 13:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.051 "name": "Existed_Raid", 00:15:02.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.051 "strip_size_kb": 0, 00:15:02.051 "state": "configuring", 00:15:02.051 "raid_level": "raid1", 00:15:02.051 "superblock": false, 00:15:02.051 "num_base_bdevs": 3, 00:15:02.051 "num_base_bdevs_discovered": 1, 00:15:02.051 "num_base_bdevs_operational": 3, 00:15:02.051 "base_bdevs_list": [ 00:15:02.051 { 00:15:02.051 "name": null, 00:15:02.051 "uuid": "13446849-fed9-46d7-9b30-803a1db8d452", 00:15:02.051 "is_configured": false, 00:15:02.051 "data_offset": 0, 00:15:02.051 "data_size": 65536 00:15:02.051 }, 00:15:02.051 { 00:15:02.051 "name": null, 00:15:02.051 "uuid": "840761bc-7ee1-4352-b8e9-7fed0e12548a", 00:15:02.051 "is_configured": false, 00:15:02.051 "data_offset": 0, 00:15:02.051 "data_size": 65536 00:15:02.051 }, 00:15:02.051 { 00:15:02.051 "name": "BaseBdev3", 00:15:02.051 "uuid": "68561f6e-1142-4cc5-a987-723157c739b8", 00:15:02.051 "is_configured": true, 00:15:02.051 "data_offset": 0, 00:15:02.051 "data_size": 65536 00:15:02.051 } 00:15:02.051 ] 00:15:02.051 }' 00:15:02.051 13:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.051 13:35:01 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:15:02.618 13:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:02.618 13:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.618 13:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.618 13:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.618 13:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.618 13:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:02.618 13:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:02.618 13:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.618 13:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.618 [2024-11-20 13:35:01.954783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:02.618 13:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.618 13:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:02.618 13:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:02.618 13:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:02.618 13:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:02.618 13:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:02.618 13:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:15:02.618 13:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.618 13:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.618 13:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.618 13:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.618 13:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.618 13:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.618 13:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.618 13:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.618 13:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.618 13:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.618 "name": "Existed_Raid", 00:15:02.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.618 "strip_size_kb": 0, 00:15:02.618 "state": "configuring", 00:15:02.618 "raid_level": "raid1", 00:15:02.618 "superblock": false, 00:15:02.618 "num_base_bdevs": 3, 00:15:02.618 "num_base_bdevs_discovered": 2, 00:15:02.618 "num_base_bdevs_operational": 3, 00:15:02.618 "base_bdevs_list": [ 00:15:02.618 { 00:15:02.618 "name": null, 00:15:02.618 "uuid": "13446849-fed9-46d7-9b30-803a1db8d452", 00:15:02.618 "is_configured": false, 00:15:02.618 "data_offset": 0, 00:15:02.618 "data_size": 65536 00:15:02.618 }, 00:15:02.618 { 00:15:02.618 "name": "BaseBdev2", 00:15:02.618 "uuid": "840761bc-7ee1-4352-b8e9-7fed0e12548a", 00:15:02.618 "is_configured": true, 00:15:02.618 "data_offset": 0, 00:15:02.618 "data_size": 65536 00:15:02.618 }, 00:15:02.618 { 
00:15:02.618 "name": "BaseBdev3", 00:15:02.618 "uuid": "68561f6e-1142-4cc5-a987-723157c739b8", 00:15:02.618 "is_configured": true, 00:15:02.618 "data_offset": 0, 00:15:02.618 "data_size": 65536 00:15:02.618 } 00:15:02.618 ] 00:15:02.618 }' 00:15:02.618 13:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.618 13:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.185 13:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:03.185 13:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.185 13:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.185 13:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.185 13:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.185 13:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:03.185 13:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.185 13:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:03.186 13:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.186 13:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.186 13:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.186 13:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 13446849-fed9-46d7-9b30-803a1db8d452 00:15:03.186 13:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.186 13:35:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.186 [2024-11-20 13:35:02.541611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:03.186 [2024-11-20 13:35:02.541661] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:03.186 [2024-11-20 13:35:02.541671] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:03.186 [2024-11-20 13:35:02.541943] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:03.186 [2024-11-20 13:35:02.542126] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:03.186 [2024-11-20 13:35:02.542142] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:03.186 [2024-11-20 13:35:02.542424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.186 NewBaseBdev 00:15:03.186 13:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.186 13:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:03.186 13:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:03.186 13:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:03.186 13:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:03.186 13:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:03.186 13:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:03.186 13:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:03.186 13:35:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.186 13:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.186 13:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.186 13:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:03.186 13:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.186 13:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.186 [ 00:15:03.186 { 00:15:03.186 "name": "NewBaseBdev", 00:15:03.186 "aliases": [ 00:15:03.186 "13446849-fed9-46d7-9b30-803a1db8d452" 00:15:03.186 ], 00:15:03.186 "product_name": "Malloc disk", 00:15:03.186 "block_size": 512, 00:15:03.186 "num_blocks": 65536, 00:15:03.186 "uuid": "13446849-fed9-46d7-9b30-803a1db8d452", 00:15:03.186 "assigned_rate_limits": { 00:15:03.186 "rw_ios_per_sec": 0, 00:15:03.186 "rw_mbytes_per_sec": 0, 00:15:03.186 "r_mbytes_per_sec": 0, 00:15:03.186 "w_mbytes_per_sec": 0 00:15:03.186 }, 00:15:03.186 "claimed": true, 00:15:03.186 "claim_type": "exclusive_write", 00:15:03.186 "zoned": false, 00:15:03.186 "supported_io_types": { 00:15:03.186 "read": true, 00:15:03.186 "write": true, 00:15:03.186 "unmap": true, 00:15:03.186 "flush": true, 00:15:03.186 "reset": true, 00:15:03.186 "nvme_admin": false, 00:15:03.186 "nvme_io": false, 00:15:03.186 "nvme_io_md": false, 00:15:03.186 "write_zeroes": true, 00:15:03.186 "zcopy": true, 00:15:03.186 "get_zone_info": false, 00:15:03.186 "zone_management": false, 00:15:03.186 "zone_append": false, 00:15:03.186 "compare": false, 00:15:03.186 "compare_and_write": false, 00:15:03.186 "abort": true, 00:15:03.186 "seek_hole": false, 00:15:03.186 "seek_data": false, 00:15:03.186 "copy": true, 00:15:03.186 "nvme_iov_md": false 00:15:03.186 }, 00:15:03.186 "memory_domains": [ 00:15:03.186 { 00:15:03.186 
"dma_device_id": "system", 00:15:03.186 "dma_device_type": 1 00:15:03.186 }, 00:15:03.186 { 00:15:03.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.186 "dma_device_type": 2 00:15:03.186 } 00:15:03.186 ], 00:15:03.186 "driver_specific": {} 00:15:03.186 } 00:15:03.186 ] 00:15:03.186 13:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.186 13:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:03.186 13:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:03.186 13:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:03.186 13:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.186 13:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:03.186 13:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:03.186 13:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:03.186 13:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.186 13:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.186 13:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.186 13:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.186 13:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.186 13:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.186 13:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:03.186 13:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.186 13:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.186 13:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.186 "name": "Existed_Raid", 00:15:03.186 "uuid": "a616f6bd-0b5d-44de-a0d0-553115cc1ba4", 00:15:03.186 "strip_size_kb": 0, 00:15:03.186 "state": "online", 00:15:03.186 "raid_level": "raid1", 00:15:03.186 "superblock": false, 00:15:03.186 "num_base_bdevs": 3, 00:15:03.186 "num_base_bdevs_discovered": 3, 00:15:03.186 "num_base_bdevs_operational": 3, 00:15:03.186 "base_bdevs_list": [ 00:15:03.186 { 00:15:03.186 "name": "NewBaseBdev", 00:15:03.186 "uuid": "13446849-fed9-46d7-9b30-803a1db8d452", 00:15:03.186 "is_configured": true, 00:15:03.186 "data_offset": 0, 00:15:03.186 "data_size": 65536 00:15:03.186 }, 00:15:03.186 { 00:15:03.186 "name": "BaseBdev2", 00:15:03.186 "uuid": "840761bc-7ee1-4352-b8e9-7fed0e12548a", 00:15:03.186 "is_configured": true, 00:15:03.186 "data_offset": 0, 00:15:03.186 "data_size": 65536 00:15:03.186 }, 00:15:03.186 { 00:15:03.186 "name": "BaseBdev3", 00:15:03.186 "uuid": "68561f6e-1142-4cc5-a987-723157c739b8", 00:15:03.186 "is_configured": true, 00:15:03.186 "data_offset": 0, 00:15:03.186 "data_size": 65536 00:15:03.186 } 00:15:03.186 ] 00:15:03.186 }' 00:15:03.186 13:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.186 13:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.752 13:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:03.752 13:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:03.752 13:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:03.752 13:35:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:03.752 13:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:03.752 13:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:03.752 13:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:03.753 13:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.753 13:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.753 13:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:03.753 [2024-11-20 13:35:03.013479] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:03.753 13:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.753 13:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:03.753 "name": "Existed_Raid", 00:15:03.753 "aliases": [ 00:15:03.753 "a616f6bd-0b5d-44de-a0d0-553115cc1ba4" 00:15:03.753 ], 00:15:03.753 "product_name": "Raid Volume", 00:15:03.753 "block_size": 512, 00:15:03.753 "num_blocks": 65536, 00:15:03.753 "uuid": "a616f6bd-0b5d-44de-a0d0-553115cc1ba4", 00:15:03.753 "assigned_rate_limits": { 00:15:03.753 "rw_ios_per_sec": 0, 00:15:03.753 "rw_mbytes_per_sec": 0, 00:15:03.753 "r_mbytes_per_sec": 0, 00:15:03.753 "w_mbytes_per_sec": 0 00:15:03.753 }, 00:15:03.753 "claimed": false, 00:15:03.753 "zoned": false, 00:15:03.753 "supported_io_types": { 00:15:03.753 "read": true, 00:15:03.753 "write": true, 00:15:03.753 "unmap": false, 00:15:03.753 "flush": false, 00:15:03.753 "reset": true, 00:15:03.753 "nvme_admin": false, 00:15:03.753 "nvme_io": false, 00:15:03.753 "nvme_io_md": false, 00:15:03.753 "write_zeroes": true, 00:15:03.753 "zcopy": false, 00:15:03.753 
"get_zone_info": false, 00:15:03.753 "zone_management": false, 00:15:03.753 "zone_append": false, 00:15:03.753 "compare": false, 00:15:03.753 "compare_and_write": false, 00:15:03.753 "abort": false, 00:15:03.753 "seek_hole": false, 00:15:03.753 "seek_data": false, 00:15:03.753 "copy": false, 00:15:03.753 "nvme_iov_md": false 00:15:03.753 }, 00:15:03.753 "memory_domains": [ 00:15:03.753 { 00:15:03.753 "dma_device_id": "system", 00:15:03.753 "dma_device_type": 1 00:15:03.753 }, 00:15:03.753 { 00:15:03.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.753 "dma_device_type": 2 00:15:03.753 }, 00:15:03.753 { 00:15:03.753 "dma_device_id": "system", 00:15:03.753 "dma_device_type": 1 00:15:03.753 }, 00:15:03.753 { 00:15:03.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.753 "dma_device_type": 2 00:15:03.753 }, 00:15:03.753 { 00:15:03.753 "dma_device_id": "system", 00:15:03.753 "dma_device_type": 1 00:15:03.753 }, 00:15:03.753 { 00:15:03.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.753 "dma_device_type": 2 00:15:03.753 } 00:15:03.753 ], 00:15:03.753 "driver_specific": { 00:15:03.753 "raid": { 00:15:03.753 "uuid": "a616f6bd-0b5d-44de-a0d0-553115cc1ba4", 00:15:03.753 "strip_size_kb": 0, 00:15:03.753 "state": "online", 00:15:03.753 "raid_level": "raid1", 00:15:03.753 "superblock": false, 00:15:03.753 "num_base_bdevs": 3, 00:15:03.753 "num_base_bdevs_discovered": 3, 00:15:03.753 "num_base_bdevs_operational": 3, 00:15:03.753 "base_bdevs_list": [ 00:15:03.753 { 00:15:03.753 "name": "NewBaseBdev", 00:15:03.753 "uuid": "13446849-fed9-46d7-9b30-803a1db8d452", 00:15:03.753 "is_configured": true, 00:15:03.753 "data_offset": 0, 00:15:03.753 "data_size": 65536 00:15:03.753 }, 00:15:03.753 { 00:15:03.753 "name": "BaseBdev2", 00:15:03.753 "uuid": "840761bc-7ee1-4352-b8e9-7fed0e12548a", 00:15:03.753 "is_configured": true, 00:15:03.753 "data_offset": 0, 00:15:03.753 "data_size": 65536 00:15:03.753 }, 00:15:03.753 { 00:15:03.753 "name": "BaseBdev3", 00:15:03.753 "uuid": 
"68561f6e-1142-4cc5-a987-723157c739b8", 00:15:03.753 "is_configured": true, 00:15:03.753 "data_offset": 0, 00:15:03.753 "data_size": 65536 00:15:03.753 } 00:15:03.753 ] 00:15:03.753 } 00:15:03.753 } 00:15:03.753 }' 00:15:03.753 13:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:03.753 13:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:03.753 BaseBdev2 00:15:03.753 BaseBdev3' 00:15:03.753 13:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.753 13:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:03.753 13:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:03.753 13:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.753 13:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:03.753 13:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.753 13:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.753 13:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.753 13:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.753 13:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.753 13:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:03.753 13:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.753 13:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:03.753 13:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.753 13:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.012 13:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.012 13:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:04.012 13:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:04.012 13:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:04.012 13:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:04.012 13:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.012 13:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.012 13:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.012 13:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.012 13:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:04.012 13:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:04.012 13:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:04.012 13:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.012 13:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:15:04.012 [2024-11-20 13:35:03.296777] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:04.012 [2024-11-20 13:35:03.296939] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:04.012 [2024-11-20 13:35:03.297055] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:04.012 [2024-11-20 13:35:03.297373] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:04.012 [2024-11-20 13:35:03.297390] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:04.012 13:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.012 13:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67160 00:15:04.012 13:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67160 ']' 00:15:04.012 13:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67160 00:15:04.012 13:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:04.012 13:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:04.012 13:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67160 00:15:04.012 killing process with pid 67160 00:15:04.012 13:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:04.012 13:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:04.012 13:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67160' 00:15:04.012 13:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67160 00:15:04.012 
[2024-11-20 13:35:03.352595] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:04.012 13:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67160 00:15:04.271 [2024-11-20 13:35:03.658221] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:05.647 13:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:05.647 00:15:05.647 real 0m10.846s 00:15:05.647 user 0m17.311s 00:15:05.647 sys 0m2.005s 00:15:05.647 13:35:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:05.647 ************************************ 00:15:05.647 END TEST raid_state_function_test 00:15:05.647 ************************************ 00:15:05.647 13:35:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.647 13:35:04 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:15:05.647 13:35:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:05.647 13:35:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:05.647 13:35:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:05.647 ************************************ 00:15:05.647 START TEST raid_state_function_test_sb 00:15:05.647 ************************************ 00:15:05.647 13:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:15:05.647 13:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:05.647 13:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:05.647 13:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:05.647 13:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:05.647 13:35:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:05.647 13:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:05.647 13:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:05.647 13:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:05.647 13:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:05.647 13:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:05.647 13:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:05.647 13:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:05.647 13:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:05.647 13:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:05.647 13:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:05.647 13:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:05.647 13:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:05.647 13:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:05.647 13:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:05.647 13:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:05.647 13:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:05.647 13:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:05.647 
13:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:05.647 13:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:05.647 13:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:05.647 13:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=67783 00:15:05.647 Process raid pid: 67783 00:15:05.647 13:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:05.647 13:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67783' 00:15:05.647 13:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 67783 00:15:05.647 13:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 67783 ']' 00:15:05.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.647 13:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.647 13:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:05.647 13:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:05.647 13:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:05.647 13:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.647 [2024-11-20 13:35:04.990174] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:15:05.648 [2024-11-20 13:35:04.990310] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:05.906 [2024-11-20 13:35:05.176158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.906 [2024-11-20 13:35:05.300350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.164 [2024-11-20 13:35:05.527303] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:06.164 [2024-11-20 13:35:05.527351] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:06.422 13:35:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:06.422 13:35:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:06.422 13:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:06.422 13:35:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.422 13:35:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.422 [2024-11-20 13:35:05.858519] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:06.422 [2024-11-20 13:35:05.858591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:06.422 [2024-11-20 13:35:05.858630] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:06.422 [2024-11-20 13:35:05.858647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:06.422 [2024-11-20 13:35:05.858658] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:06.422 [2024-11-20 13:35:05.858673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:06.422 13:35:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.422 13:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:06.422 13:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:06.422 13:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:06.422 13:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:06.422 13:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:06.422 13:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:06.422 13:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.422 13:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.422 13:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.422 13:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.422 13:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.422 13:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.422 13:35:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.422 13:35:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.422 13:35:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.680 13:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.680 "name": "Existed_Raid", 00:15:06.680 "uuid": "6836236a-1ec6-449d-9ca5-f22d64b7cad7", 00:15:06.680 "strip_size_kb": 0, 00:15:06.680 "state": "configuring", 00:15:06.680 "raid_level": "raid1", 00:15:06.680 "superblock": true, 00:15:06.680 "num_base_bdevs": 3, 00:15:06.680 "num_base_bdevs_discovered": 0, 00:15:06.680 "num_base_bdevs_operational": 3, 00:15:06.680 "base_bdevs_list": [ 00:15:06.680 { 00:15:06.680 "name": "BaseBdev1", 00:15:06.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.680 "is_configured": false, 00:15:06.680 "data_offset": 0, 00:15:06.680 "data_size": 0 00:15:06.680 }, 00:15:06.680 { 00:15:06.680 "name": "BaseBdev2", 00:15:06.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.680 "is_configured": false, 00:15:06.680 "data_offset": 0, 00:15:06.680 "data_size": 0 00:15:06.680 }, 00:15:06.680 { 00:15:06.680 "name": "BaseBdev3", 00:15:06.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.680 "is_configured": false, 00:15:06.680 "data_offset": 0, 00:15:06.680 "data_size": 0 00:15:06.680 } 00:15:06.680 ] 00:15:06.681 }' 00:15:06.681 13:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.681 13:35:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.939 13:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:06.939 13:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.939 13:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.939 [2024-11-20 13:35:06.262227] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:06.939 [2024-11-20 13:35:06.262452] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:06.939 13:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.939 13:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:06.939 13:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.939 13:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.939 [2024-11-20 13:35:06.270229] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:06.939 [2024-11-20 13:35:06.270288] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:06.939 [2024-11-20 13:35:06.270311] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:06.939 [2024-11-20 13:35:06.270327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:06.939 [2024-11-20 13:35:06.270338] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:06.939 [2024-11-20 13:35:06.270353] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:06.939 13:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.939 13:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:06.939 13:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.939 13:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.939 [2024-11-20 13:35:06.317715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:06.939 BaseBdev1 
00:15:06.939 13:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.939 13:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:06.939 13:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:06.939 13:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:06.939 13:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:06.939 13:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:06.939 13:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:06.939 13:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:06.939 13:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.939 13:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.939 13:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.939 13:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:06.939 13:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.939 13:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.939 [ 00:15:06.939 { 00:15:06.939 "name": "BaseBdev1", 00:15:06.939 "aliases": [ 00:15:06.939 "2d398a91-7aed-48c7-ad6a-ad9a7ddce444" 00:15:06.939 ], 00:15:06.939 "product_name": "Malloc disk", 00:15:06.939 "block_size": 512, 00:15:06.939 "num_blocks": 65536, 00:15:06.939 "uuid": "2d398a91-7aed-48c7-ad6a-ad9a7ddce444", 00:15:06.939 "assigned_rate_limits": { 00:15:06.939 
"rw_ios_per_sec": 0, 00:15:06.939 "rw_mbytes_per_sec": 0, 00:15:06.939 "r_mbytes_per_sec": 0, 00:15:06.939 "w_mbytes_per_sec": 0 00:15:06.939 }, 00:15:06.939 "claimed": true, 00:15:06.939 "claim_type": "exclusive_write", 00:15:06.939 "zoned": false, 00:15:06.939 "supported_io_types": { 00:15:06.939 "read": true, 00:15:06.939 "write": true, 00:15:06.939 "unmap": true, 00:15:06.939 "flush": true, 00:15:06.939 "reset": true, 00:15:06.939 "nvme_admin": false, 00:15:06.939 "nvme_io": false, 00:15:06.939 "nvme_io_md": false, 00:15:06.939 "write_zeroes": true, 00:15:06.939 "zcopy": true, 00:15:06.939 "get_zone_info": false, 00:15:06.939 "zone_management": false, 00:15:06.939 "zone_append": false, 00:15:06.939 "compare": false, 00:15:06.939 "compare_and_write": false, 00:15:06.939 "abort": true, 00:15:06.939 "seek_hole": false, 00:15:06.939 "seek_data": false, 00:15:06.939 "copy": true, 00:15:06.939 "nvme_iov_md": false 00:15:06.939 }, 00:15:06.939 "memory_domains": [ 00:15:06.939 { 00:15:06.939 "dma_device_id": "system", 00:15:06.939 "dma_device_type": 1 00:15:06.939 }, 00:15:06.939 { 00:15:06.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.939 "dma_device_type": 2 00:15:06.939 } 00:15:06.939 ], 00:15:06.939 "driver_specific": {} 00:15:06.939 } 00:15:06.939 ] 00:15:06.939 13:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.939 13:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:06.939 13:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:06.939 13:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:06.939 13:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:06.939 13:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:15:06.939 13:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:06.939 13:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:06.940 13:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.940 13:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.940 13:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.940 13:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.940 13:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.940 13:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.940 13:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.940 13:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.940 13:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.940 13:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.940 "name": "Existed_Raid", 00:15:06.940 "uuid": "e2023f48-0cfa-49a1-8aac-5ece32e61642", 00:15:06.940 "strip_size_kb": 0, 00:15:06.940 "state": "configuring", 00:15:06.940 "raid_level": "raid1", 00:15:06.940 "superblock": true, 00:15:06.940 "num_base_bdevs": 3, 00:15:06.940 "num_base_bdevs_discovered": 1, 00:15:06.940 "num_base_bdevs_operational": 3, 00:15:06.940 "base_bdevs_list": [ 00:15:06.940 { 00:15:06.940 "name": "BaseBdev1", 00:15:06.940 "uuid": "2d398a91-7aed-48c7-ad6a-ad9a7ddce444", 00:15:06.940 "is_configured": true, 00:15:06.940 "data_offset": 2048, 00:15:06.940 "data_size": 63488 
00:15:06.940 }, 00:15:06.940 { 00:15:06.940 "name": "BaseBdev2", 00:15:06.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.940 "is_configured": false, 00:15:06.940 "data_offset": 0, 00:15:06.940 "data_size": 0 00:15:06.940 }, 00:15:06.940 { 00:15:06.940 "name": "BaseBdev3", 00:15:06.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.940 "is_configured": false, 00:15:06.940 "data_offset": 0, 00:15:06.940 "data_size": 0 00:15:06.940 } 00:15:06.940 ] 00:15:06.940 }' 00:15:06.940 13:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.940 13:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.505 13:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:07.505 13:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.505 13:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.505 [2024-11-20 13:35:06.829119] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:07.505 [2024-11-20 13:35:06.829180] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:07.505 13:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.505 13:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:07.505 13:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.505 13:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.505 [2024-11-20 13:35:06.841144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:07.505 [2024-11-20 13:35:06.843442] 
bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:07.505 [2024-11-20 13:35:06.843618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:07.505 [2024-11-20 13:35:06.843720] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:07.505 [2024-11-20 13:35:06.843769] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:07.505 13:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.505 13:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:07.505 13:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:07.505 13:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:07.505 13:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:07.505 13:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:07.505 13:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:07.505 13:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:07.505 13:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:07.505 13:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.505 13:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.505 13:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.505 13:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:15:07.505 13:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.505 13:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.505 13:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.505 13:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.505 13:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.505 13:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.505 "name": "Existed_Raid", 00:15:07.505 "uuid": "1a75bf5c-6cd1-46e2-9747-9d40acce9186", 00:15:07.505 "strip_size_kb": 0, 00:15:07.505 "state": "configuring", 00:15:07.505 "raid_level": "raid1", 00:15:07.505 "superblock": true, 00:15:07.505 "num_base_bdevs": 3, 00:15:07.505 "num_base_bdevs_discovered": 1, 00:15:07.505 "num_base_bdevs_operational": 3, 00:15:07.505 "base_bdevs_list": [ 00:15:07.505 { 00:15:07.505 "name": "BaseBdev1", 00:15:07.505 "uuid": "2d398a91-7aed-48c7-ad6a-ad9a7ddce444", 00:15:07.505 "is_configured": true, 00:15:07.505 "data_offset": 2048, 00:15:07.505 "data_size": 63488 00:15:07.505 }, 00:15:07.505 { 00:15:07.505 "name": "BaseBdev2", 00:15:07.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.505 "is_configured": false, 00:15:07.505 "data_offset": 0, 00:15:07.505 "data_size": 0 00:15:07.505 }, 00:15:07.505 { 00:15:07.505 "name": "BaseBdev3", 00:15:07.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.505 "is_configured": false, 00:15:07.505 "data_offset": 0, 00:15:07.505 "data_size": 0 00:15:07.505 } 00:15:07.505 ] 00:15:07.505 }' 00:15:07.505 13:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.505 13:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:08.071 13:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:08.071 13:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.072 13:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.072 [2024-11-20 13:35:07.295355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:08.072 BaseBdev2 00:15:08.072 13:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.072 13:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:08.072 13:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:08.072 13:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:08.072 13:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:08.072 13:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:08.072 13:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:08.072 13:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:08.072 13:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.072 13:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.072 13:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.072 13:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:08.072 13:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:08.072 13:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.072 [ 00:15:08.072 { 00:15:08.072 "name": "BaseBdev2", 00:15:08.072 "aliases": [ 00:15:08.072 "7bcedefa-676b-4def-ace8-72ca234de2b1" 00:15:08.072 ], 00:15:08.072 "product_name": "Malloc disk", 00:15:08.072 "block_size": 512, 00:15:08.072 "num_blocks": 65536, 00:15:08.072 "uuid": "7bcedefa-676b-4def-ace8-72ca234de2b1", 00:15:08.072 "assigned_rate_limits": { 00:15:08.072 "rw_ios_per_sec": 0, 00:15:08.072 "rw_mbytes_per_sec": 0, 00:15:08.072 "r_mbytes_per_sec": 0, 00:15:08.072 "w_mbytes_per_sec": 0 00:15:08.072 }, 00:15:08.072 "claimed": true, 00:15:08.072 "claim_type": "exclusive_write", 00:15:08.072 "zoned": false, 00:15:08.072 "supported_io_types": { 00:15:08.072 "read": true, 00:15:08.072 "write": true, 00:15:08.072 "unmap": true, 00:15:08.072 "flush": true, 00:15:08.072 "reset": true, 00:15:08.072 "nvme_admin": false, 00:15:08.072 "nvme_io": false, 00:15:08.072 "nvme_io_md": false, 00:15:08.072 "write_zeroes": true, 00:15:08.072 "zcopy": true, 00:15:08.072 "get_zone_info": false, 00:15:08.072 "zone_management": false, 00:15:08.072 "zone_append": false, 00:15:08.072 "compare": false, 00:15:08.072 "compare_and_write": false, 00:15:08.072 "abort": true, 00:15:08.072 "seek_hole": false, 00:15:08.072 "seek_data": false, 00:15:08.072 "copy": true, 00:15:08.072 "nvme_iov_md": false 00:15:08.072 }, 00:15:08.072 "memory_domains": [ 00:15:08.072 { 00:15:08.072 "dma_device_id": "system", 00:15:08.072 "dma_device_type": 1 00:15:08.072 }, 00:15:08.072 { 00:15:08.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.072 "dma_device_type": 2 00:15:08.072 } 00:15:08.072 ], 00:15:08.072 "driver_specific": {} 00:15:08.072 } 00:15:08.072 ] 00:15:08.072 13:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.072 13:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:15:08.072 13:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:08.072 13:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:08.072 13:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:08.072 13:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:08.072 13:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:08.072 13:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:08.072 13:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:08.072 13:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:08.072 13:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.072 13:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.072 13:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.072 13:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.072 13:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.072 13:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.072 13:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.072 13:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:08.072 13:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.072 
13:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.072 "name": "Existed_Raid", 00:15:08.072 "uuid": "1a75bf5c-6cd1-46e2-9747-9d40acce9186", 00:15:08.072 "strip_size_kb": 0, 00:15:08.072 "state": "configuring", 00:15:08.072 "raid_level": "raid1", 00:15:08.072 "superblock": true, 00:15:08.072 "num_base_bdevs": 3, 00:15:08.072 "num_base_bdevs_discovered": 2, 00:15:08.072 "num_base_bdevs_operational": 3, 00:15:08.072 "base_bdevs_list": [ 00:15:08.072 { 00:15:08.072 "name": "BaseBdev1", 00:15:08.072 "uuid": "2d398a91-7aed-48c7-ad6a-ad9a7ddce444", 00:15:08.072 "is_configured": true, 00:15:08.072 "data_offset": 2048, 00:15:08.072 "data_size": 63488 00:15:08.072 }, 00:15:08.072 { 00:15:08.072 "name": "BaseBdev2", 00:15:08.072 "uuid": "7bcedefa-676b-4def-ace8-72ca234de2b1", 00:15:08.072 "is_configured": true, 00:15:08.072 "data_offset": 2048, 00:15:08.072 "data_size": 63488 00:15:08.072 }, 00:15:08.072 { 00:15:08.072 "name": "BaseBdev3", 00:15:08.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.072 "is_configured": false, 00:15:08.072 "data_offset": 0, 00:15:08.072 "data_size": 0 00:15:08.072 } 00:15:08.072 ] 00:15:08.072 }' 00:15:08.072 13:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.072 13:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.330 13:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:08.330 13:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.330 13:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.330 [2024-11-20 13:35:07.812409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:08.330 [2024-11-20 13:35:07.812971] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:15:08.330 [2024-11-20 13:35:07.813003] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:08.330 [2024-11-20 13:35:07.813335] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:08.330 [2024-11-20 13:35:07.813499] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:08.330 [2024-11-20 13:35:07.813510] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:08.330 [2024-11-20 13:35:07.813667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:08.330 BaseBdev3 00:15:08.331 13:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.589 13:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:08.589 13:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:08.589 13:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:08.589 13:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:08.589 13:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:08.589 13:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:08.589 13:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:08.589 13:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.589 13:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.589 13:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.589 13:35:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:08.589 13:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.589 13:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.589 [ 00:15:08.589 { 00:15:08.589 "name": "BaseBdev3", 00:15:08.589 "aliases": [ 00:15:08.589 "7926c2cc-cbd3-49d1-ba92-1443d7921cf9" 00:15:08.589 ], 00:15:08.589 "product_name": "Malloc disk", 00:15:08.589 "block_size": 512, 00:15:08.589 "num_blocks": 65536, 00:15:08.589 "uuid": "7926c2cc-cbd3-49d1-ba92-1443d7921cf9", 00:15:08.589 "assigned_rate_limits": { 00:15:08.589 "rw_ios_per_sec": 0, 00:15:08.589 "rw_mbytes_per_sec": 0, 00:15:08.589 "r_mbytes_per_sec": 0, 00:15:08.589 "w_mbytes_per_sec": 0 00:15:08.589 }, 00:15:08.589 "claimed": true, 00:15:08.589 "claim_type": "exclusive_write", 00:15:08.589 "zoned": false, 00:15:08.589 "supported_io_types": { 00:15:08.589 "read": true, 00:15:08.589 "write": true, 00:15:08.589 "unmap": true, 00:15:08.590 "flush": true, 00:15:08.590 "reset": true, 00:15:08.590 "nvme_admin": false, 00:15:08.590 "nvme_io": false, 00:15:08.590 "nvme_io_md": false, 00:15:08.590 "write_zeroes": true, 00:15:08.590 "zcopy": true, 00:15:08.590 "get_zone_info": false, 00:15:08.590 "zone_management": false, 00:15:08.590 "zone_append": false, 00:15:08.590 "compare": false, 00:15:08.590 "compare_and_write": false, 00:15:08.590 "abort": true, 00:15:08.590 "seek_hole": false, 00:15:08.590 "seek_data": false, 00:15:08.590 "copy": true, 00:15:08.590 "nvme_iov_md": false 00:15:08.590 }, 00:15:08.590 "memory_domains": [ 00:15:08.590 { 00:15:08.590 "dma_device_id": "system", 00:15:08.590 "dma_device_type": 1 00:15:08.590 }, 00:15:08.590 { 00:15:08.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.590 "dma_device_type": 2 00:15:08.590 } 00:15:08.590 ], 00:15:08.590 "driver_specific": {} 00:15:08.590 } 00:15:08.590 ] 
00:15:08.590 13:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.590 13:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:08.590 13:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:08.590 13:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:08.590 13:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:08.590 13:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:08.590 13:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.590 13:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:08.590 13:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:08.590 13:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:08.590 13:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.590 13:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.590 13:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.590 13:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.590 13:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.590 13:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:08.590 13:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.590 
13:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.590 13:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.590 13:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.590 "name": "Existed_Raid", 00:15:08.590 "uuid": "1a75bf5c-6cd1-46e2-9747-9d40acce9186", 00:15:08.590 "strip_size_kb": 0, 00:15:08.590 "state": "online", 00:15:08.590 "raid_level": "raid1", 00:15:08.590 "superblock": true, 00:15:08.590 "num_base_bdevs": 3, 00:15:08.590 "num_base_bdevs_discovered": 3, 00:15:08.590 "num_base_bdevs_operational": 3, 00:15:08.590 "base_bdevs_list": [ 00:15:08.590 { 00:15:08.590 "name": "BaseBdev1", 00:15:08.590 "uuid": "2d398a91-7aed-48c7-ad6a-ad9a7ddce444", 00:15:08.590 "is_configured": true, 00:15:08.590 "data_offset": 2048, 00:15:08.590 "data_size": 63488 00:15:08.590 }, 00:15:08.590 { 00:15:08.590 "name": "BaseBdev2", 00:15:08.590 "uuid": "7bcedefa-676b-4def-ace8-72ca234de2b1", 00:15:08.590 "is_configured": true, 00:15:08.590 "data_offset": 2048, 00:15:08.590 "data_size": 63488 00:15:08.590 }, 00:15:08.590 { 00:15:08.590 "name": "BaseBdev3", 00:15:08.590 "uuid": "7926c2cc-cbd3-49d1-ba92-1443d7921cf9", 00:15:08.590 "is_configured": true, 00:15:08.590 "data_offset": 2048, 00:15:08.590 "data_size": 63488 00:15:08.590 } 00:15:08.590 ] 00:15:08.590 }' 00:15:08.590 13:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.590 13:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.848 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:08.848 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:08.848 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:15:08.848 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:08.848 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:08.848 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:08.848 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:08.848 13:35:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.848 13:35:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.848 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:08.848 [2024-11-20 13:35:08.260218] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:08.848 13:35:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.848 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:08.848 "name": "Existed_Raid", 00:15:08.848 "aliases": [ 00:15:08.848 "1a75bf5c-6cd1-46e2-9747-9d40acce9186" 00:15:08.849 ], 00:15:08.849 "product_name": "Raid Volume", 00:15:08.849 "block_size": 512, 00:15:08.849 "num_blocks": 63488, 00:15:08.849 "uuid": "1a75bf5c-6cd1-46e2-9747-9d40acce9186", 00:15:08.849 "assigned_rate_limits": { 00:15:08.849 "rw_ios_per_sec": 0, 00:15:08.849 "rw_mbytes_per_sec": 0, 00:15:08.849 "r_mbytes_per_sec": 0, 00:15:08.849 "w_mbytes_per_sec": 0 00:15:08.849 }, 00:15:08.849 "claimed": false, 00:15:08.849 "zoned": false, 00:15:08.849 "supported_io_types": { 00:15:08.849 "read": true, 00:15:08.849 "write": true, 00:15:08.849 "unmap": false, 00:15:08.849 "flush": false, 00:15:08.849 "reset": true, 00:15:08.849 "nvme_admin": false, 00:15:08.849 "nvme_io": false, 00:15:08.849 "nvme_io_md": false, 00:15:08.849 "write_zeroes": true, 
00:15:08.849 "zcopy": false, 00:15:08.849 "get_zone_info": false, 00:15:08.849 "zone_management": false, 00:15:08.849 "zone_append": false, 00:15:08.849 "compare": false, 00:15:08.849 "compare_and_write": false, 00:15:08.849 "abort": false, 00:15:08.849 "seek_hole": false, 00:15:08.849 "seek_data": false, 00:15:08.849 "copy": false, 00:15:08.849 "nvme_iov_md": false 00:15:08.849 }, 00:15:08.849 "memory_domains": [ 00:15:08.849 { 00:15:08.849 "dma_device_id": "system", 00:15:08.849 "dma_device_type": 1 00:15:08.849 }, 00:15:08.849 { 00:15:08.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.849 "dma_device_type": 2 00:15:08.849 }, 00:15:08.849 { 00:15:08.849 "dma_device_id": "system", 00:15:08.849 "dma_device_type": 1 00:15:08.849 }, 00:15:08.849 { 00:15:08.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.849 "dma_device_type": 2 00:15:08.849 }, 00:15:08.849 { 00:15:08.849 "dma_device_id": "system", 00:15:08.849 "dma_device_type": 1 00:15:08.849 }, 00:15:08.849 { 00:15:08.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.849 "dma_device_type": 2 00:15:08.849 } 00:15:08.849 ], 00:15:08.849 "driver_specific": { 00:15:08.849 "raid": { 00:15:08.849 "uuid": "1a75bf5c-6cd1-46e2-9747-9d40acce9186", 00:15:08.849 "strip_size_kb": 0, 00:15:08.849 "state": "online", 00:15:08.849 "raid_level": "raid1", 00:15:08.849 "superblock": true, 00:15:08.849 "num_base_bdevs": 3, 00:15:08.849 "num_base_bdevs_discovered": 3, 00:15:08.849 "num_base_bdevs_operational": 3, 00:15:08.849 "base_bdevs_list": [ 00:15:08.849 { 00:15:08.849 "name": "BaseBdev1", 00:15:08.849 "uuid": "2d398a91-7aed-48c7-ad6a-ad9a7ddce444", 00:15:08.849 "is_configured": true, 00:15:08.849 "data_offset": 2048, 00:15:08.849 "data_size": 63488 00:15:08.849 }, 00:15:08.849 { 00:15:08.849 "name": "BaseBdev2", 00:15:08.849 "uuid": "7bcedefa-676b-4def-ace8-72ca234de2b1", 00:15:08.849 "is_configured": true, 00:15:08.849 "data_offset": 2048, 00:15:08.849 "data_size": 63488 00:15:08.849 }, 00:15:08.849 { 
00:15:08.849 "name": "BaseBdev3", 00:15:08.849 "uuid": "7926c2cc-cbd3-49d1-ba92-1443d7921cf9", 00:15:08.849 "is_configured": true, 00:15:08.849 "data_offset": 2048, 00:15:08.849 "data_size": 63488 00:15:08.849 } 00:15:08.849 ] 00:15:08.849 } 00:15:08.849 } 00:15:08.849 }' 00:15:08.849 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:08.849 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:08.849 BaseBdev2 00:15:08.849 BaseBdev3' 00:15:08.849 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.108 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:09.108 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:09.108 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:09.108 13:35:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.108 13:35:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.108 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.108 13:35:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.108 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:09.108 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:09.108 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:09.108 13:35:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:09.108 13:35:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.108 13:35:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.108 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.108 13:35:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.108 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:09.108 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:09.108 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:09.108 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:09.108 13:35:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.108 13:35:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.108 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.108 13:35:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.108 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:09.108 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:09.108 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:09.108 13:35:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.108 13:35:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.108 [2024-11-20 13:35:08.527602] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:09.366 13:35:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.366 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:09.366 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:09.366 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:09.366 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:09.366 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:09.366 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:09.366 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:09.366 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:09.366 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:09.366 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:09.366 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:09.366 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.366 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.366 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.366 
13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.366 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.366 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.366 13:35:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.367 13:35:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.367 13:35:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.367 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.367 "name": "Existed_Raid", 00:15:09.367 "uuid": "1a75bf5c-6cd1-46e2-9747-9d40acce9186", 00:15:09.367 "strip_size_kb": 0, 00:15:09.367 "state": "online", 00:15:09.367 "raid_level": "raid1", 00:15:09.367 "superblock": true, 00:15:09.367 "num_base_bdevs": 3, 00:15:09.367 "num_base_bdevs_discovered": 2, 00:15:09.367 "num_base_bdevs_operational": 2, 00:15:09.367 "base_bdevs_list": [ 00:15:09.367 { 00:15:09.367 "name": null, 00:15:09.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.367 "is_configured": false, 00:15:09.367 "data_offset": 0, 00:15:09.367 "data_size": 63488 00:15:09.367 }, 00:15:09.367 { 00:15:09.367 "name": "BaseBdev2", 00:15:09.367 "uuid": "7bcedefa-676b-4def-ace8-72ca234de2b1", 00:15:09.367 "is_configured": true, 00:15:09.367 "data_offset": 2048, 00:15:09.367 "data_size": 63488 00:15:09.367 }, 00:15:09.367 { 00:15:09.367 "name": "BaseBdev3", 00:15:09.367 "uuid": "7926c2cc-cbd3-49d1-ba92-1443d7921cf9", 00:15:09.367 "is_configured": true, 00:15:09.367 "data_offset": 2048, 00:15:09.367 "data_size": 63488 00:15:09.367 } 00:15:09.367 ] 00:15:09.367 }' 00:15:09.367 13:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.367 
13:35:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.625 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:09.625 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:09.625 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:09.625 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.625 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.625 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.625 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.625 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:09.625 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:09.625 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:09.625 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.625 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.625 [2024-11-20 13:35:09.086501] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:09.883 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.883 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:09.883 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:09.883 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:09.883 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.883 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:09.883 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.883 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.883 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:09.883 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:09.883 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:09.883 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.883 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.883 [2024-11-20 13:35:09.240167] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:09.883 [2024-11-20 13:35:09.240435] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:09.883 [2024-11-20 13:35:09.336269] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:09.883 [2024-11-20 13:35:09.336553] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:09.883 [2024-11-20 13:35:09.336672] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:09.883 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.883 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:09.883 13:35:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:09.883 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.883 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:09.883 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.883 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.883 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.140 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:10.140 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:10.140 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:10.140 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:10.140 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:10.140 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:10.140 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.140 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.140 BaseBdev2 00:15:10.140 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.140 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:10.140 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:10.140 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:15:10.140 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:10.140 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:10.140 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:10.140 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:10.140 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.140 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.141 [ 00:15:10.141 { 00:15:10.141 "name": "BaseBdev2", 00:15:10.141 "aliases": [ 00:15:10.141 "cc388678-c9fa-48c6-84b1-7f55be493e4a" 00:15:10.141 ], 00:15:10.141 "product_name": "Malloc disk", 00:15:10.141 "block_size": 512, 00:15:10.141 "num_blocks": 65536, 00:15:10.141 "uuid": "cc388678-c9fa-48c6-84b1-7f55be493e4a", 00:15:10.141 "assigned_rate_limits": { 00:15:10.141 "rw_ios_per_sec": 0, 00:15:10.141 "rw_mbytes_per_sec": 0, 00:15:10.141 "r_mbytes_per_sec": 0, 00:15:10.141 "w_mbytes_per_sec": 0 00:15:10.141 }, 00:15:10.141 "claimed": false, 00:15:10.141 "zoned": false, 00:15:10.141 "supported_io_types": { 00:15:10.141 "read": true, 00:15:10.141 "write": true, 00:15:10.141 "unmap": true, 00:15:10.141 "flush": true, 00:15:10.141 "reset": true, 00:15:10.141 "nvme_admin": false, 00:15:10.141 "nvme_io": false, 00:15:10.141 
"nvme_io_md": false, 00:15:10.141 "write_zeroes": true, 00:15:10.141 "zcopy": true, 00:15:10.141 "get_zone_info": false, 00:15:10.141 "zone_management": false, 00:15:10.141 "zone_append": false, 00:15:10.141 "compare": false, 00:15:10.141 "compare_and_write": false, 00:15:10.141 "abort": true, 00:15:10.141 "seek_hole": false, 00:15:10.141 "seek_data": false, 00:15:10.141 "copy": true, 00:15:10.141 "nvme_iov_md": false 00:15:10.141 }, 00:15:10.141 "memory_domains": [ 00:15:10.141 { 00:15:10.141 "dma_device_id": "system", 00:15:10.141 "dma_device_type": 1 00:15:10.141 }, 00:15:10.141 { 00:15:10.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.141 "dma_device_type": 2 00:15:10.141 } 00:15:10.141 ], 00:15:10.141 "driver_specific": {} 00:15:10.141 } 00:15:10.141 ] 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.141 BaseBdev3 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.141 [ 00:15:10.141 { 00:15:10.141 "name": "BaseBdev3", 00:15:10.141 "aliases": [ 00:15:10.141 "63dbbca0-ed8e-485e-aad7-f2a275a809dc" 00:15:10.141 ], 00:15:10.141 "product_name": "Malloc disk", 00:15:10.141 "block_size": 512, 00:15:10.141 "num_blocks": 65536, 00:15:10.141 "uuid": "63dbbca0-ed8e-485e-aad7-f2a275a809dc", 00:15:10.141 "assigned_rate_limits": { 00:15:10.141 "rw_ios_per_sec": 0, 00:15:10.141 "rw_mbytes_per_sec": 0, 00:15:10.141 "r_mbytes_per_sec": 0, 00:15:10.141 "w_mbytes_per_sec": 0 00:15:10.141 }, 00:15:10.141 "claimed": false, 00:15:10.141 "zoned": false, 00:15:10.141 "supported_io_types": { 00:15:10.141 "read": true, 00:15:10.141 "write": true, 00:15:10.141 "unmap": true, 00:15:10.141 "flush": true, 00:15:10.141 "reset": true, 00:15:10.141 "nvme_admin": false, 
00:15:10.141 "nvme_io": false, 00:15:10.141 "nvme_io_md": false, 00:15:10.141 "write_zeroes": true, 00:15:10.141 "zcopy": true, 00:15:10.141 "get_zone_info": false, 00:15:10.141 "zone_management": false, 00:15:10.141 "zone_append": false, 00:15:10.141 "compare": false, 00:15:10.141 "compare_and_write": false, 00:15:10.141 "abort": true, 00:15:10.141 "seek_hole": false, 00:15:10.141 "seek_data": false, 00:15:10.141 "copy": true, 00:15:10.141 "nvme_iov_md": false 00:15:10.141 }, 00:15:10.141 "memory_domains": [ 00:15:10.141 { 00:15:10.141 "dma_device_id": "system", 00:15:10.141 "dma_device_type": 1 00:15:10.141 }, 00:15:10.141 { 00:15:10.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.141 "dma_device_type": 2 00:15:10.141 } 00:15:10.141 ], 00:15:10.141 "driver_specific": {} 00:15:10.141 } 00:15:10.141 ] 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.141 [2024-11-20 13:35:09.540317] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:10.141 [2024-11-20 13:35:09.540508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:10.141 [2024-11-20 13:35:09.540551] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:10.141 [2024-11-20 13:35:09.542770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:10.141 
13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.141 "name": "Existed_Raid", 00:15:10.141 "uuid": "a7d625e5-e7e1-427f-ae63-ad018620b567", 00:15:10.141 "strip_size_kb": 0, 00:15:10.141 "state": "configuring", 00:15:10.141 "raid_level": "raid1", 00:15:10.141 "superblock": true, 00:15:10.141 "num_base_bdevs": 3, 00:15:10.141 "num_base_bdevs_discovered": 2, 00:15:10.141 "num_base_bdevs_operational": 3, 00:15:10.141 "base_bdevs_list": [ 00:15:10.141 { 00:15:10.141 "name": "BaseBdev1", 00:15:10.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.141 "is_configured": false, 00:15:10.141 "data_offset": 0, 00:15:10.141 "data_size": 0 00:15:10.141 }, 00:15:10.141 { 00:15:10.141 "name": "BaseBdev2", 00:15:10.141 "uuid": "cc388678-c9fa-48c6-84b1-7f55be493e4a", 00:15:10.141 "is_configured": true, 00:15:10.141 "data_offset": 2048, 00:15:10.141 "data_size": 63488 00:15:10.141 }, 00:15:10.141 { 00:15:10.141 "name": "BaseBdev3", 00:15:10.141 "uuid": "63dbbca0-ed8e-485e-aad7-f2a275a809dc", 00:15:10.141 "is_configured": true, 00:15:10.141 "data_offset": 2048, 00:15:10.141 "data_size": 63488 00:15:10.141 } 00:15:10.141 ] 00:15:10.141 }' 00:15:10.141 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.142 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.709 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:10.709 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.709 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.709 [2024-11-20 13:35:09.923803] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:10.709 13:35:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.709 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:10.709 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:10.709 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:10.709 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:10.709 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:10.709 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:10.709 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.709 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.709 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.709 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.709 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.709 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.709 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.709 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:10.709 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.709 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.709 "name": 
"Existed_Raid", 00:15:10.709 "uuid": "a7d625e5-e7e1-427f-ae63-ad018620b567", 00:15:10.709 "strip_size_kb": 0, 00:15:10.709 "state": "configuring", 00:15:10.709 "raid_level": "raid1", 00:15:10.709 "superblock": true, 00:15:10.709 "num_base_bdevs": 3, 00:15:10.709 "num_base_bdevs_discovered": 1, 00:15:10.709 "num_base_bdevs_operational": 3, 00:15:10.709 "base_bdevs_list": [ 00:15:10.709 { 00:15:10.709 "name": "BaseBdev1", 00:15:10.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.709 "is_configured": false, 00:15:10.709 "data_offset": 0, 00:15:10.709 "data_size": 0 00:15:10.709 }, 00:15:10.709 { 00:15:10.709 "name": null, 00:15:10.709 "uuid": "cc388678-c9fa-48c6-84b1-7f55be493e4a", 00:15:10.709 "is_configured": false, 00:15:10.709 "data_offset": 0, 00:15:10.709 "data_size": 63488 00:15:10.709 }, 00:15:10.709 { 00:15:10.709 "name": "BaseBdev3", 00:15:10.709 "uuid": "63dbbca0-ed8e-485e-aad7-f2a275a809dc", 00:15:10.709 "is_configured": true, 00:15:10.709 "data_offset": 2048, 00:15:10.709 "data_size": 63488 00:15:10.709 } 00:15:10.709 ] 00:15:10.709 }' 00:15:10.709 13:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.709 13:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.968 13:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:10.968 13:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.968 13:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.968 13:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.968 13:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.968 13:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:10.968 
13:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:10.968 13:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.968 13:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.228 [2024-11-20 13:35:10.454179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:11.228 BaseBdev1 00:15:11.228 13:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.228 13:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:11.228 13:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:11.228 13:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:11.228 13:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:11.228 13:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:11.228 13:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:11.228 13:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:11.228 13:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.228 13:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.228 13:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.228 13:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:11.228 13:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:11.228 13:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.228 [ 00:15:11.228 { 00:15:11.228 "name": "BaseBdev1", 00:15:11.228 "aliases": [ 00:15:11.228 "a168f591-ef56-4a61-a437-77a5561deb52" 00:15:11.228 ], 00:15:11.228 "product_name": "Malloc disk", 00:15:11.228 "block_size": 512, 00:15:11.228 "num_blocks": 65536, 00:15:11.228 "uuid": "a168f591-ef56-4a61-a437-77a5561deb52", 00:15:11.228 "assigned_rate_limits": { 00:15:11.228 "rw_ios_per_sec": 0, 00:15:11.228 "rw_mbytes_per_sec": 0, 00:15:11.228 "r_mbytes_per_sec": 0, 00:15:11.228 "w_mbytes_per_sec": 0 00:15:11.228 }, 00:15:11.228 "claimed": true, 00:15:11.228 "claim_type": "exclusive_write", 00:15:11.228 "zoned": false, 00:15:11.228 "supported_io_types": { 00:15:11.228 "read": true, 00:15:11.228 "write": true, 00:15:11.228 "unmap": true, 00:15:11.228 "flush": true, 00:15:11.228 "reset": true, 00:15:11.228 "nvme_admin": false, 00:15:11.228 "nvme_io": false, 00:15:11.228 "nvme_io_md": false, 00:15:11.228 "write_zeroes": true, 00:15:11.228 "zcopy": true, 00:15:11.228 "get_zone_info": false, 00:15:11.228 "zone_management": false, 00:15:11.228 "zone_append": false, 00:15:11.228 "compare": false, 00:15:11.228 "compare_and_write": false, 00:15:11.228 "abort": true, 00:15:11.228 "seek_hole": false, 00:15:11.228 "seek_data": false, 00:15:11.228 "copy": true, 00:15:11.228 "nvme_iov_md": false 00:15:11.228 }, 00:15:11.228 "memory_domains": [ 00:15:11.228 { 00:15:11.228 "dma_device_id": "system", 00:15:11.228 "dma_device_type": 1 00:15:11.228 }, 00:15:11.228 { 00:15:11.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.228 "dma_device_type": 2 00:15:11.228 } 00:15:11.228 ], 00:15:11.228 "driver_specific": {} 00:15:11.228 } 00:15:11.228 ] 00:15:11.228 13:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.228 13:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:11.228 
13:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:11.228 13:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:11.228 13:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:11.228 13:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:11.228 13:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:11.228 13:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.228 13:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.228 13:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.228 13:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.228 13:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.228 13:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.228 13:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.228 13:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.228 13:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.228 13:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.228 13:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.228 "name": "Existed_Raid", 00:15:11.228 "uuid": "a7d625e5-e7e1-427f-ae63-ad018620b567", 00:15:11.228 "strip_size_kb": 0, 
00:15:11.228 "state": "configuring", 00:15:11.228 "raid_level": "raid1", 00:15:11.228 "superblock": true, 00:15:11.228 "num_base_bdevs": 3, 00:15:11.228 "num_base_bdevs_discovered": 2, 00:15:11.228 "num_base_bdevs_operational": 3, 00:15:11.229 "base_bdevs_list": [ 00:15:11.229 { 00:15:11.229 "name": "BaseBdev1", 00:15:11.229 "uuid": "a168f591-ef56-4a61-a437-77a5561deb52", 00:15:11.229 "is_configured": true, 00:15:11.229 "data_offset": 2048, 00:15:11.229 "data_size": 63488 00:15:11.229 }, 00:15:11.229 { 00:15:11.229 "name": null, 00:15:11.229 "uuid": "cc388678-c9fa-48c6-84b1-7f55be493e4a", 00:15:11.229 "is_configured": false, 00:15:11.229 "data_offset": 0, 00:15:11.229 "data_size": 63488 00:15:11.229 }, 00:15:11.229 { 00:15:11.229 "name": "BaseBdev3", 00:15:11.229 "uuid": "63dbbca0-ed8e-485e-aad7-f2a275a809dc", 00:15:11.229 "is_configured": true, 00:15:11.229 "data_offset": 2048, 00:15:11.229 "data_size": 63488 00:15:11.229 } 00:15:11.229 ] 00:15:11.229 }' 00:15:11.229 13:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.229 13:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.489 13:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.489 13:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.489 13:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.489 13:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:11.489 13:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.489 13:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:11.489 13:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:15:11.489 13:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.489 13:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.489 [2024-11-20 13:35:10.969820] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:11.748 13:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.748 13:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:11.748 13:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:11.748 13:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:11.748 13:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:11.748 13:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:11.748 13:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.748 13:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.748 13:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.748 13:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.748 13:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.748 13:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.748 13:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.748 13:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:11.748 13:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.748 13:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.748 13:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.748 "name": "Existed_Raid", 00:15:11.748 "uuid": "a7d625e5-e7e1-427f-ae63-ad018620b567", 00:15:11.748 "strip_size_kb": 0, 00:15:11.748 "state": "configuring", 00:15:11.748 "raid_level": "raid1", 00:15:11.748 "superblock": true, 00:15:11.748 "num_base_bdevs": 3, 00:15:11.748 "num_base_bdevs_discovered": 1, 00:15:11.748 "num_base_bdevs_operational": 3, 00:15:11.748 "base_bdevs_list": [ 00:15:11.748 { 00:15:11.748 "name": "BaseBdev1", 00:15:11.748 "uuid": "a168f591-ef56-4a61-a437-77a5561deb52", 00:15:11.748 "is_configured": true, 00:15:11.748 "data_offset": 2048, 00:15:11.748 "data_size": 63488 00:15:11.748 }, 00:15:11.748 { 00:15:11.748 "name": null, 00:15:11.748 "uuid": "cc388678-c9fa-48c6-84b1-7f55be493e4a", 00:15:11.748 "is_configured": false, 00:15:11.748 "data_offset": 0, 00:15:11.748 "data_size": 63488 00:15:11.748 }, 00:15:11.748 { 00:15:11.748 "name": null, 00:15:11.748 "uuid": "63dbbca0-ed8e-485e-aad7-f2a275a809dc", 00:15:11.748 "is_configured": false, 00:15:11.748 "data_offset": 0, 00:15:11.748 "data_size": 63488 00:15:11.748 } 00:15:11.748 ] 00:15:11.748 }' 00:15:11.748 13:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.748 13:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.007 13:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.007 13:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:12.007 13:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:12.007 13:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.007 13:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.007 13:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:12.008 13:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:12.008 13:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.008 13:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.008 [2024-11-20 13:35:11.469230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:12.008 13:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.008 13:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:12.008 13:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:12.008 13:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:12.008 13:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:12.008 13:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:12.008 13:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:12.008 13:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.008 13:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.008 13:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:15:12.008 13:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.008 13:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.008 13:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.008 13:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.008 13:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.266 13:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.266 13:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.266 "name": "Existed_Raid", 00:15:12.266 "uuid": "a7d625e5-e7e1-427f-ae63-ad018620b567", 00:15:12.266 "strip_size_kb": 0, 00:15:12.266 "state": "configuring", 00:15:12.266 "raid_level": "raid1", 00:15:12.266 "superblock": true, 00:15:12.266 "num_base_bdevs": 3, 00:15:12.266 "num_base_bdevs_discovered": 2, 00:15:12.266 "num_base_bdevs_operational": 3, 00:15:12.266 "base_bdevs_list": [ 00:15:12.266 { 00:15:12.266 "name": "BaseBdev1", 00:15:12.266 "uuid": "a168f591-ef56-4a61-a437-77a5561deb52", 00:15:12.266 "is_configured": true, 00:15:12.266 "data_offset": 2048, 00:15:12.266 "data_size": 63488 00:15:12.266 }, 00:15:12.266 { 00:15:12.266 "name": null, 00:15:12.266 "uuid": "cc388678-c9fa-48c6-84b1-7f55be493e4a", 00:15:12.266 "is_configured": false, 00:15:12.266 "data_offset": 0, 00:15:12.266 "data_size": 63488 00:15:12.266 }, 00:15:12.266 { 00:15:12.266 "name": "BaseBdev3", 00:15:12.266 "uuid": "63dbbca0-ed8e-485e-aad7-f2a275a809dc", 00:15:12.266 "is_configured": true, 00:15:12.266 "data_offset": 2048, 00:15:12.266 "data_size": 63488 00:15:12.266 } 00:15:12.266 ] 00:15:12.266 }' 00:15:12.266 13:35:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.266 13:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.525 13:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.525 13:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:12.525 13:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.525 13:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.525 13:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.525 13:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:12.525 13:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:12.525 13:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.525 13:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.525 [2024-11-20 13:35:11.952613] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:12.784 13:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.784 13:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:12.784 13:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:12.784 13:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:12.784 13:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:12.784 13:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:15:12.784 13:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:12.784 13:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.784 13:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.784 13:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.784 13:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.784 13:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.784 13:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.784 13:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.784 13:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.784 13:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.784 13:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.784 "name": "Existed_Raid", 00:15:12.784 "uuid": "a7d625e5-e7e1-427f-ae63-ad018620b567", 00:15:12.784 "strip_size_kb": 0, 00:15:12.784 "state": "configuring", 00:15:12.784 "raid_level": "raid1", 00:15:12.784 "superblock": true, 00:15:12.784 "num_base_bdevs": 3, 00:15:12.784 "num_base_bdevs_discovered": 1, 00:15:12.784 "num_base_bdevs_operational": 3, 00:15:12.784 "base_bdevs_list": [ 00:15:12.784 { 00:15:12.784 "name": null, 00:15:12.784 "uuid": "a168f591-ef56-4a61-a437-77a5561deb52", 00:15:12.784 "is_configured": false, 00:15:12.784 "data_offset": 0, 00:15:12.784 "data_size": 63488 00:15:12.784 }, 00:15:12.784 { 00:15:12.784 "name": null, 00:15:12.784 "uuid": 
"cc388678-c9fa-48c6-84b1-7f55be493e4a", 00:15:12.784 "is_configured": false, 00:15:12.784 "data_offset": 0, 00:15:12.784 "data_size": 63488 00:15:12.784 }, 00:15:12.784 { 00:15:12.784 "name": "BaseBdev3", 00:15:12.784 "uuid": "63dbbca0-ed8e-485e-aad7-f2a275a809dc", 00:15:12.784 "is_configured": true, 00:15:12.784 "data_offset": 2048, 00:15:12.784 "data_size": 63488 00:15:12.784 } 00:15:12.784 ] 00:15:12.784 }' 00:15:12.784 13:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.784 13:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.044 13:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.044 13:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:13.044 13:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.044 13:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.044 13:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.044 13:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:13.044 13:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:13.044 13:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.044 13:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.044 [2024-11-20 13:35:12.461201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:13.044 13:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.044 13:35:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:13.044 13:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:13.044 13:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:13.044 13:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:13.044 13:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:13.044 13:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:13.044 13:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.044 13:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.044 13:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.044 13:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.044 13:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.044 13:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.044 13:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.044 13:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.045 13:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.045 13:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.045 "name": "Existed_Raid", 00:15:13.045 "uuid": "a7d625e5-e7e1-427f-ae63-ad018620b567", 00:15:13.045 "strip_size_kb": 0, 00:15:13.045 "state": "configuring", 00:15:13.045 
"raid_level": "raid1", 00:15:13.045 "superblock": true, 00:15:13.045 "num_base_bdevs": 3, 00:15:13.045 "num_base_bdevs_discovered": 2, 00:15:13.045 "num_base_bdevs_operational": 3, 00:15:13.045 "base_bdevs_list": [ 00:15:13.045 { 00:15:13.045 "name": null, 00:15:13.045 "uuid": "a168f591-ef56-4a61-a437-77a5561deb52", 00:15:13.045 "is_configured": false, 00:15:13.045 "data_offset": 0, 00:15:13.045 "data_size": 63488 00:15:13.045 }, 00:15:13.045 { 00:15:13.045 "name": "BaseBdev2", 00:15:13.045 "uuid": "cc388678-c9fa-48c6-84b1-7f55be493e4a", 00:15:13.045 "is_configured": true, 00:15:13.045 "data_offset": 2048, 00:15:13.045 "data_size": 63488 00:15:13.045 }, 00:15:13.045 { 00:15:13.045 "name": "BaseBdev3", 00:15:13.045 "uuid": "63dbbca0-ed8e-485e-aad7-f2a275a809dc", 00:15:13.045 "is_configured": true, 00:15:13.045 "data_offset": 2048, 00:15:13.045 "data_size": 63488 00:15:13.045 } 00:15:13.045 ] 00:15:13.045 }' 00:15:13.045 13:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.045 13:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.615 13:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:13.615 13:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.615 13:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.615 13:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.615 13:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.615 13:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:13.615 13:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.615 13:35:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:13.615 13:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.615 13:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.615 13:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.615 13:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a168f591-ef56-4a61-a437-77a5561deb52 00:15:13.615 13:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.615 13:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.615 [2024-11-20 13:35:13.006597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:13.615 [2024-11-20 13:35:13.006887] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:13.615 [2024-11-20 13:35:13.006905] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:13.615 [2024-11-20 13:35:13.007211] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:13.615 [2024-11-20 13:35:13.007358] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:13.615 [2024-11-20 13:35:13.007374] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:13.615 [2024-11-20 13:35:13.007528] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.615 NewBaseBdev 00:15:13.615 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.615 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:13.615 
13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:13.615 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:13.615 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:13.615 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:13.615 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:13.615 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:13.615 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.615 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.615 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.615 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:13.615 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.615 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.615 [ 00:15:13.615 { 00:15:13.615 "name": "NewBaseBdev", 00:15:13.615 "aliases": [ 00:15:13.615 "a168f591-ef56-4a61-a437-77a5561deb52" 00:15:13.615 ], 00:15:13.615 "product_name": "Malloc disk", 00:15:13.615 "block_size": 512, 00:15:13.615 "num_blocks": 65536, 00:15:13.615 "uuid": "a168f591-ef56-4a61-a437-77a5561deb52", 00:15:13.615 "assigned_rate_limits": { 00:15:13.615 "rw_ios_per_sec": 0, 00:15:13.615 "rw_mbytes_per_sec": 0, 00:15:13.615 "r_mbytes_per_sec": 0, 00:15:13.615 "w_mbytes_per_sec": 0 00:15:13.615 }, 00:15:13.615 "claimed": true, 00:15:13.615 "claim_type": "exclusive_write", 00:15:13.615 
"zoned": false, 00:15:13.615 "supported_io_types": { 00:15:13.615 "read": true, 00:15:13.615 "write": true, 00:15:13.615 "unmap": true, 00:15:13.615 "flush": true, 00:15:13.615 "reset": true, 00:15:13.615 "nvme_admin": false, 00:15:13.615 "nvme_io": false, 00:15:13.615 "nvme_io_md": false, 00:15:13.615 "write_zeroes": true, 00:15:13.615 "zcopy": true, 00:15:13.615 "get_zone_info": false, 00:15:13.615 "zone_management": false, 00:15:13.615 "zone_append": false, 00:15:13.615 "compare": false, 00:15:13.615 "compare_and_write": false, 00:15:13.615 "abort": true, 00:15:13.615 "seek_hole": false, 00:15:13.615 "seek_data": false, 00:15:13.615 "copy": true, 00:15:13.615 "nvme_iov_md": false 00:15:13.615 }, 00:15:13.615 "memory_domains": [ 00:15:13.615 { 00:15:13.615 "dma_device_id": "system", 00:15:13.615 "dma_device_type": 1 00:15:13.615 }, 00:15:13.615 { 00:15:13.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.615 "dma_device_type": 2 00:15:13.615 } 00:15:13.615 ], 00:15:13.615 "driver_specific": {} 00:15:13.615 } 00:15:13.615 ] 00:15:13.615 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.615 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:13.615 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:13.615 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:13.615 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.615 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:13.615 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:13.615 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:15:13.615 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.615 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.615 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.615 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.615 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.615 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.615 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.615 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.615 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.615 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.615 "name": "Existed_Raid", 00:15:13.616 "uuid": "a7d625e5-e7e1-427f-ae63-ad018620b567", 00:15:13.616 "strip_size_kb": 0, 00:15:13.616 "state": "online", 00:15:13.616 "raid_level": "raid1", 00:15:13.616 "superblock": true, 00:15:13.616 "num_base_bdevs": 3, 00:15:13.616 "num_base_bdevs_discovered": 3, 00:15:13.616 "num_base_bdevs_operational": 3, 00:15:13.616 "base_bdevs_list": [ 00:15:13.616 { 00:15:13.616 "name": "NewBaseBdev", 00:15:13.616 "uuid": "a168f591-ef56-4a61-a437-77a5561deb52", 00:15:13.616 "is_configured": true, 00:15:13.616 "data_offset": 2048, 00:15:13.616 "data_size": 63488 00:15:13.616 }, 00:15:13.616 { 00:15:13.616 "name": "BaseBdev2", 00:15:13.616 "uuid": "cc388678-c9fa-48c6-84b1-7f55be493e4a", 00:15:13.616 "is_configured": true, 00:15:13.616 "data_offset": 2048, 00:15:13.616 "data_size": 63488 00:15:13.616 }, 00:15:13.616 
{ 00:15:13.616 "name": "BaseBdev3", 00:15:13.616 "uuid": "63dbbca0-ed8e-485e-aad7-f2a275a809dc", 00:15:13.616 "is_configured": true, 00:15:13.616 "data_offset": 2048, 00:15:13.616 "data_size": 63488 00:15:13.616 } 00:15:13.616 ] 00:15:13.616 }' 00:15:13.616 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.616 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.183 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:14.183 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:14.183 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:14.183 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:14.183 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:14.183 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:14.183 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:14.183 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:14.183 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.183 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.183 [2024-11-20 13:35:13.470529] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:14.183 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.183 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:14.183 "name": "Existed_Raid", 00:15:14.183 
"aliases": [ 00:15:14.183 "a7d625e5-e7e1-427f-ae63-ad018620b567" 00:15:14.183 ], 00:15:14.183 "product_name": "Raid Volume", 00:15:14.183 "block_size": 512, 00:15:14.183 "num_blocks": 63488, 00:15:14.183 "uuid": "a7d625e5-e7e1-427f-ae63-ad018620b567", 00:15:14.183 "assigned_rate_limits": { 00:15:14.183 "rw_ios_per_sec": 0, 00:15:14.183 "rw_mbytes_per_sec": 0, 00:15:14.183 "r_mbytes_per_sec": 0, 00:15:14.183 "w_mbytes_per_sec": 0 00:15:14.183 }, 00:15:14.183 "claimed": false, 00:15:14.183 "zoned": false, 00:15:14.183 "supported_io_types": { 00:15:14.183 "read": true, 00:15:14.183 "write": true, 00:15:14.183 "unmap": false, 00:15:14.183 "flush": false, 00:15:14.183 "reset": true, 00:15:14.183 "nvme_admin": false, 00:15:14.183 "nvme_io": false, 00:15:14.183 "nvme_io_md": false, 00:15:14.183 "write_zeroes": true, 00:15:14.183 "zcopy": false, 00:15:14.183 "get_zone_info": false, 00:15:14.183 "zone_management": false, 00:15:14.183 "zone_append": false, 00:15:14.183 "compare": false, 00:15:14.183 "compare_and_write": false, 00:15:14.183 "abort": false, 00:15:14.183 "seek_hole": false, 00:15:14.183 "seek_data": false, 00:15:14.183 "copy": false, 00:15:14.183 "nvme_iov_md": false 00:15:14.183 }, 00:15:14.183 "memory_domains": [ 00:15:14.183 { 00:15:14.183 "dma_device_id": "system", 00:15:14.183 "dma_device_type": 1 00:15:14.183 }, 00:15:14.183 { 00:15:14.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.183 "dma_device_type": 2 00:15:14.183 }, 00:15:14.183 { 00:15:14.183 "dma_device_id": "system", 00:15:14.183 "dma_device_type": 1 00:15:14.183 }, 00:15:14.183 { 00:15:14.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.183 "dma_device_type": 2 00:15:14.183 }, 00:15:14.183 { 00:15:14.183 "dma_device_id": "system", 00:15:14.183 "dma_device_type": 1 00:15:14.183 }, 00:15:14.183 { 00:15:14.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.183 "dma_device_type": 2 00:15:14.183 } 00:15:14.183 ], 00:15:14.183 "driver_specific": { 00:15:14.183 "raid": { 00:15:14.183 
"uuid": "a7d625e5-e7e1-427f-ae63-ad018620b567", 00:15:14.183 "strip_size_kb": 0, 00:15:14.183 "state": "online", 00:15:14.183 "raid_level": "raid1", 00:15:14.183 "superblock": true, 00:15:14.183 "num_base_bdevs": 3, 00:15:14.183 "num_base_bdevs_discovered": 3, 00:15:14.183 "num_base_bdevs_operational": 3, 00:15:14.183 "base_bdevs_list": [ 00:15:14.183 { 00:15:14.183 "name": "NewBaseBdev", 00:15:14.183 "uuid": "a168f591-ef56-4a61-a437-77a5561deb52", 00:15:14.183 "is_configured": true, 00:15:14.183 "data_offset": 2048, 00:15:14.183 "data_size": 63488 00:15:14.183 }, 00:15:14.183 { 00:15:14.183 "name": "BaseBdev2", 00:15:14.183 "uuid": "cc388678-c9fa-48c6-84b1-7f55be493e4a", 00:15:14.183 "is_configured": true, 00:15:14.183 "data_offset": 2048, 00:15:14.183 "data_size": 63488 00:15:14.183 }, 00:15:14.183 { 00:15:14.183 "name": "BaseBdev3", 00:15:14.184 "uuid": "63dbbca0-ed8e-485e-aad7-f2a275a809dc", 00:15:14.184 "is_configured": true, 00:15:14.184 "data_offset": 2048, 00:15:14.184 "data_size": 63488 00:15:14.184 } 00:15:14.184 ] 00:15:14.184 } 00:15:14.184 } 00:15:14.184 }' 00:15:14.184 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:14.184 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:14.184 BaseBdev2 00:15:14.184 BaseBdev3' 00:15:14.184 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:14.184 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:14.184 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:14.184 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:14.184 13:35:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:14.184 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.184 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.184 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.184 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:14.184 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:14.184 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:14.184 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:14.184 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:14.184 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.184 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.443 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.443 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:14.443 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:14.443 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:14.443 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:14.443 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:14.444 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.444 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.444 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.444 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:14.444 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:14.444 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:14.444 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.444 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.444 [2024-11-20 13:35:13.737941] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:14.444 [2024-11-20 13:35:13.737981] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:14.444 [2024-11-20 13:35:13.738080] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:14.444 [2024-11-20 13:35:13.738379] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:14.444 [2024-11-20 13:35:13.738401] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:14.444 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.444 13:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 67783 00:15:14.444 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 67783 ']' 
00:15:14.444 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 67783 00:15:14.444 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:14.444 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:14.444 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67783 00:15:14.444 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:14.444 killing process with pid 67783 00:15:14.444 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:14.444 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67783' 00:15:14.444 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 67783 00:15:14.444 [2024-11-20 13:35:13.777614] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:14.444 13:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 67783 00:15:14.703 [2024-11-20 13:35:14.083979] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:16.079 13:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:16.079 00:15:16.079 real 0m10.337s 00:15:16.079 user 0m16.322s 00:15:16.079 sys 0m2.102s 00:15:16.079 13:35:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:16.079 13:35:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.079 ************************************ 00:15:16.079 END TEST raid_state_function_test_sb 00:15:16.079 ************************************ 00:15:16.079 13:35:15 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 
00:15:16.079 13:35:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:16.079 13:35:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:16.079 13:35:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:16.079 ************************************ 00:15:16.079 START TEST raid_superblock_test 00:15:16.079 ************************************ 00:15:16.079 13:35:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:15:16.079 13:35:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:16.079 13:35:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:16.079 13:35:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:16.079 13:35:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:16.079 13:35:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:16.079 13:35:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:16.079 13:35:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:16.079 13:35:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:16.079 13:35:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:16.079 13:35:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:16.079 13:35:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:16.079 13:35:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:16.079 13:35:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:16.079 13:35:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 
00:15:16.079 13:35:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:16.079 13:35:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68404 00:15:16.079 13:35:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:16.079 13:35:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68404 00:15:16.079 13:35:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68404 ']' 00:15:16.079 13:35:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.079 13:35:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:16.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:16.079 13:35:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.079 13:35:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:16.079 13:35:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.079 [2024-11-20 13:35:15.404920] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:15:16.079 [2024-11-20 13:35:15.405049] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68404 ] 00:15:16.387 [2024-11-20 13:35:15.583780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.387 [2024-11-20 13:35:15.692987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.662 [2024-11-20 13:35:15.876626] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:16.662 [2024-11-20 13:35:15.876699] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:16.921 13:35:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:16.921 13:35:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:16.921 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:16.921 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:16.921 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:16.921 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:16.921 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:16.921 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:16.921 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:16.921 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:16.921 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:15:16.921 
13:35:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.921 13:35:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.921 malloc1 00:15:16.921 13:35:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.921 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:16.921 13:35:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.921 13:35:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.921 [2024-11-20 13:35:16.340177] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:16.921 [2024-11-20 13:35:16.340402] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.921 [2024-11-20 13:35:16.340470] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:16.921 [2024-11-20 13:35:16.340559] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.921 [2024-11-20 13:35:16.343116] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.921 [2024-11-20 13:35:16.343273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:16.921 pt1 00:15:16.921 13:35:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.921 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:16.921 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:16.921 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:16.921 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:16.921 13:35:16 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:16.921 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:16.921 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:16.921 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:16.921 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:16.921 13:35:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.921 13:35:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.921 malloc2 00:15:16.922 13:35:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.922 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:16.922 13:35:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.922 13:35:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.922 [2024-11-20 13:35:16.397403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:16.922 [2024-11-20 13:35:16.397472] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.922 [2024-11-20 13:35:16.397506] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:16.922 [2024-11-20 13:35:16.397521] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.922 [2024-11-20 13:35:16.400024] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.922 [2024-11-20 13:35:16.400201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:16.922 
pt2 00:15:16.922 13:35:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.922 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:16.922 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:16.922 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:16.922 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:17.182 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:17.182 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:17.182 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:17.182 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:17.182 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:17.182 13:35:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.182 13:35:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.182 malloc3 00:15:17.182 13:35:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.182 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:17.182 13:35:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.182 13:35:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.182 [2024-11-20 13:35:16.467642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:17.182 [2024-11-20 13:35:16.467818] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.182 [2024-11-20 13:35:16.467882] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:17.182 [2024-11-20 13:35:16.467974] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.182 [2024-11-20 13:35:16.470362] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.182 [2024-11-20 13:35:16.470406] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:17.182 pt3 00:15:17.182 13:35:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.182 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:17.182 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:17.182 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:17.182 13:35:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.182 13:35:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.182 [2024-11-20 13:35:16.479676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:17.182 [2024-11-20 13:35:16.481760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:17.182 [2024-11-20 13:35:16.481835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:17.182 [2024-11-20 13:35:16.482003] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:17.182 [2024-11-20 13:35:16.482025] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:17.182 [2024-11-20 13:35:16.482315] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:17.182 
[2024-11-20 13:35:16.482507] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:17.182 [2024-11-20 13:35:16.482523] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:17.182 [2024-11-20 13:35:16.482677] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.182 13:35:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.182 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:17.182 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.182 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.182 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:17.182 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:17.182 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:17.182 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.182 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.182 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.182 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.182 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.182 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.182 13:35:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.182 13:35:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:17.182 13:35:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.182 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.182 "name": "raid_bdev1", 00:15:17.182 "uuid": "82344784-f727-437b-82df-ae116a90ed5d", 00:15:17.182 "strip_size_kb": 0, 00:15:17.182 "state": "online", 00:15:17.182 "raid_level": "raid1", 00:15:17.182 "superblock": true, 00:15:17.182 "num_base_bdevs": 3, 00:15:17.182 "num_base_bdevs_discovered": 3, 00:15:17.182 "num_base_bdevs_operational": 3, 00:15:17.182 "base_bdevs_list": [ 00:15:17.182 { 00:15:17.182 "name": "pt1", 00:15:17.182 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:17.182 "is_configured": true, 00:15:17.182 "data_offset": 2048, 00:15:17.182 "data_size": 63488 00:15:17.182 }, 00:15:17.182 { 00:15:17.182 "name": "pt2", 00:15:17.182 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:17.182 "is_configured": true, 00:15:17.182 "data_offset": 2048, 00:15:17.182 "data_size": 63488 00:15:17.182 }, 00:15:17.182 { 00:15:17.182 "name": "pt3", 00:15:17.182 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:17.182 "is_configured": true, 00:15:17.182 "data_offset": 2048, 00:15:17.182 "data_size": 63488 00:15:17.182 } 00:15:17.182 ] 00:15:17.182 }' 00:15:17.182 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.182 13:35:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.440 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:17.440 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:17.441 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:17.441 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:17.441 13:35:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:17.441 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:17.441 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:17.441 13:35:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.441 13:35:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.441 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:17.699 [2024-11-20 13:35:16.927363] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:17.699 13:35:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.699 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:17.699 "name": "raid_bdev1", 00:15:17.699 "aliases": [ 00:15:17.699 "82344784-f727-437b-82df-ae116a90ed5d" 00:15:17.699 ], 00:15:17.699 "product_name": "Raid Volume", 00:15:17.699 "block_size": 512, 00:15:17.699 "num_blocks": 63488, 00:15:17.699 "uuid": "82344784-f727-437b-82df-ae116a90ed5d", 00:15:17.699 "assigned_rate_limits": { 00:15:17.699 "rw_ios_per_sec": 0, 00:15:17.699 "rw_mbytes_per_sec": 0, 00:15:17.699 "r_mbytes_per_sec": 0, 00:15:17.699 "w_mbytes_per_sec": 0 00:15:17.699 }, 00:15:17.699 "claimed": false, 00:15:17.699 "zoned": false, 00:15:17.699 "supported_io_types": { 00:15:17.699 "read": true, 00:15:17.699 "write": true, 00:15:17.699 "unmap": false, 00:15:17.699 "flush": false, 00:15:17.699 "reset": true, 00:15:17.699 "nvme_admin": false, 00:15:17.699 "nvme_io": false, 00:15:17.699 "nvme_io_md": false, 00:15:17.699 "write_zeroes": true, 00:15:17.699 "zcopy": false, 00:15:17.699 "get_zone_info": false, 00:15:17.699 "zone_management": false, 00:15:17.699 "zone_append": false, 00:15:17.699 "compare": false, 00:15:17.699 
"compare_and_write": false, 00:15:17.699 "abort": false, 00:15:17.699 "seek_hole": false, 00:15:17.699 "seek_data": false, 00:15:17.699 "copy": false, 00:15:17.699 "nvme_iov_md": false 00:15:17.699 }, 00:15:17.699 "memory_domains": [ 00:15:17.699 { 00:15:17.699 "dma_device_id": "system", 00:15:17.699 "dma_device_type": 1 00:15:17.699 }, 00:15:17.699 { 00:15:17.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.699 "dma_device_type": 2 00:15:17.699 }, 00:15:17.699 { 00:15:17.699 "dma_device_id": "system", 00:15:17.699 "dma_device_type": 1 00:15:17.699 }, 00:15:17.699 { 00:15:17.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.699 "dma_device_type": 2 00:15:17.699 }, 00:15:17.699 { 00:15:17.699 "dma_device_id": "system", 00:15:17.699 "dma_device_type": 1 00:15:17.699 }, 00:15:17.699 { 00:15:17.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.699 "dma_device_type": 2 00:15:17.699 } 00:15:17.699 ], 00:15:17.699 "driver_specific": { 00:15:17.699 "raid": { 00:15:17.699 "uuid": "82344784-f727-437b-82df-ae116a90ed5d", 00:15:17.699 "strip_size_kb": 0, 00:15:17.699 "state": "online", 00:15:17.699 "raid_level": "raid1", 00:15:17.699 "superblock": true, 00:15:17.699 "num_base_bdevs": 3, 00:15:17.699 "num_base_bdevs_discovered": 3, 00:15:17.699 "num_base_bdevs_operational": 3, 00:15:17.699 "base_bdevs_list": [ 00:15:17.699 { 00:15:17.699 "name": "pt1", 00:15:17.699 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:17.699 "is_configured": true, 00:15:17.699 "data_offset": 2048, 00:15:17.699 "data_size": 63488 00:15:17.699 }, 00:15:17.699 { 00:15:17.699 "name": "pt2", 00:15:17.699 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:17.699 "is_configured": true, 00:15:17.699 "data_offset": 2048, 00:15:17.699 "data_size": 63488 00:15:17.699 }, 00:15:17.699 { 00:15:17.699 "name": "pt3", 00:15:17.699 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:17.699 "is_configured": true, 00:15:17.699 "data_offset": 2048, 00:15:17.699 "data_size": 63488 00:15:17.699 } 
00:15:17.699 ] 00:15:17.699 } 00:15:17.699 } 00:15:17.699 }' 00:15:17.699 13:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:17.699 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:17.699 pt2 00:15:17.699 pt3' 00:15:17.699 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:17.699 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:17.700 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:17.700 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:17.700 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:17.700 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.700 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.700 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.700 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:17.700 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:17.700 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:17.700 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:17.700 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.700 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:15:17.700 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.700 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.700 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:17.700 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:17.700 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:17.700 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:17.700 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.700 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:17.700 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.959 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.959 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:17.959 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:17.959 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:17.959 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:17.959 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.959 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.959 [2024-11-20 13:35:17.210875] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:17.959 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:17.959 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=82344784-f727-437b-82df-ae116a90ed5d 00:15:17.959 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 82344784-f727-437b-82df-ae116a90ed5d ']' 00:15:17.959 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:17.959 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.959 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.959 [2024-11-20 13:35:17.254548] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:17.959 [2024-11-20 13:35:17.254711] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:17.959 [2024-11-20 13:35:17.254822] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:17.959 [2024-11-20 13:35:17.254906] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:17.959 [2024-11-20 13:35:17.254920] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:17.959 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.959 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.959 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.959 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:17.959 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.959 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.960 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:17.960 13:35:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:17.960 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:17.960 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:17.960 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.960 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.960 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.960 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:17.960 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:17.960 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.960 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.960 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.960 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:17.960 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:17.960 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.960 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.960 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.960 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:17.960 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.960 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:15:17.960 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.960 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.960 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:17.960 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:17.960 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:17.960 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:17.960 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:17.960 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:17.960 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:17.960 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:17.960 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:17.960 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.960 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.960 [2024-11-20 13:35:17.414544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:17.960 [2024-11-20 13:35:17.416662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:17.960 [2024-11-20 13:35:17.416728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
malloc3 is claimed 00:15:17.960 [2024-11-20 13:35:17.416785] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:17.960 [2024-11-20 13:35:17.416849] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:17.960 [2024-11-20 13:35:17.416875] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:17.960 [2024-11-20 13:35:17.416899] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:17.960 [2024-11-20 13:35:17.416911] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:17.960 request: 00:15:17.960 { 00:15:17.960 "name": "raid_bdev1", 00:15:17.960 "raid_level": "raid1", 00:15:17.960 "base_bdevs": [ 00:15:17.960 "malloc1", 00:15:17.960 "malloc2", 00:15:17.960 "malloc3" 00:15:17.960 ], 00:15:17.960 "superblock": false, 00:15:17.960 "method": "bdev_raid_create", 00:15:17.960 "req_id": 1 00:15:17.960 } 00:15:17.960 Got JSON-RPC error response 00:15:17.960 response: 00:15:17.960 { 00:15:17.960 "code": -17, 00:15:17.960 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:17.960 } 00:15:17.960 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:17.960 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:17.960 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:17.960 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:17.960 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:17.960 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.960 13:35:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:17.960 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.960 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.960 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.218 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:18.218 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:18.218 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:18.218 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.218 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.218 [2024-11-20 13:35:17.478457] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:18.218 [2024-11-20 13:35:17.478524] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.218 [2024-11-20 13:35:17.478551] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:18.218 [2024-11-20 13:35:17.478578] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.218 [2024-11-20 13:35:17.481075] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.218 [2024-11-20 13:35:17.481116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:18.218 [2024-11-20 13:35:17.481210] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:18.218 [2024-11-20 13:35:17.481266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:18.218 pt1 00:15:18.218 13:35:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.218 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:15:18.218 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:18.218 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:18.218 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:18.218 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:18.218 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:18.218 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.218 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.218 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.218 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.218 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.218 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.218 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.218 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.218 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.218 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.218 "name": "raid_bdev1", 00:15:18.218 "uuid": "82344784-f727-437b-82df-ae116a90ed5d", 00:15:18.218 "strip_size_kb": 0, 00:15:18.218 "state": "configuring", 00:15:18.218 
"raid_level": "raid1", 00:15:18.218 "superblock": true, 00:15:18.218 "num_base_bdevs": 3, 00:15:18.218 "num_base_bdevs_discovered": 1, 00:15:18.218 "num_base_bdevs_operational": 3, 00:15:18.218 "base_bdevs_list": [ 00:15:18.218 { 00:15:18.218 "name": "pt1", 00:15:18.218 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:18.218 "is_configured": true, 00:15:18.218 "data_offset": 2048, 00:15:18.218 "data_size": 63488 00:15:18.218 }, 00:15:18.218 { 00:15:18.218 "name": null, 00:15:18.219 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:18.219 "is_configured": false, 00:15:18.219 "data_offset": 2048, 00:15:18.219 "data_size": 63488 00:15:18.219 }, 00:15:18.219 { 00:15:18.219 "name": null, 00:15:18.219 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:18.219 "is_configured": false, 00:15:18.219 "data_offset": 2048, 00:15:18.219 "data_size": 63488 00:15:18.219 } 00:15:18.219 ] 00:15:18.219 }' 00:15:18.219 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.219 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.478 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:18.478 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:18.478 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.478 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.478 [2024-11-20 13:35:17.910468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:18.478 [2024-11-20 13:35:17.910674] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.478 [2024-11-20 13:35:17.910742] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:18.478 [2024-11-20 13:35:17.910840] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.478 [2024-11-20 13:35:17.911406] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.478 [2024-11-20 13:35:17.911551] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:18.478 [2024-11-20 13:35:17.911755] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:18.478 [2024-11-20 13:35:17.911882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:18.478 pt2 00:15:18.478 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.478 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:18.478 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.478 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.478 [2024-11-20 13:35:17.922467] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:18.478 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.478 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:15:18.478 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:18.478 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:18.478 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:18.478 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:18.478 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:18.478 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:15:18.478 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.478 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.478 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.478 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.478 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.478 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.478 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.478 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.738 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.738 "name": "raid_bdev1", 00:15:18.738 "uuid": "82344784-f727-437b-82df-ae116a90ed5d", 00:15:18.738 "strip_size_kb": 0, 00:15:18.738 "state": "configuring", 00:15:18.738 "raid_level": "raid1", 00:15:18.738 "superblock": true, 00:15:18.738 "num_base_bdevs": 3, 00:15:18.738 "num_base_bdevs_discovered": 1, 00:15:18.738 "num_base_bdevs_operational": 3, 00:15:18.738 "base_bdevs_list": [ 00:15:18.738 { 00:15:18.738 "name": "pt1", 00:15:18.738 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:18.738 "is_configured": true, 00:15:18.738 "data_offset": 2048, 00:15:18.738 "data_size": 63488 00:15:18.738 }, 00:15:18.738 { 00:15:18.738 "name": null, 00:15:18.738 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:18.738 "is_configured": false, 00:15:18.738 "data_offset": 0, 00:15:18.738 "data_size": 63488 00:15:18.738 }, 00:15:18.738 { 00:15:18.738 "name": null, 00:15:18.738 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:18.738 "is_configured": false, 00:15:18.738 "data_offset": 2048, 00:15:18.738 
"data_size": 63488 00:15:18.738 } 00:15:18.738 ] 00:15:18.738 }' 00:15:18.738 13:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.738 13:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.996 13:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:18.996 13:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:18.996 13:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:18.996 13:35:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.996 13:35:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.996 [2024-11-20 13:35:18.350459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:18.996 [2024-11-20 13:35:18.350691] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.996 [2024-11-20 13:35:18.350725] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:18.996 [2024-11-20 13:35:18.350743] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.996 [2024-11-20 13:35:18.351259] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.996 [2024-11-20 13:35:18.351288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:18.997 [2024-11-20 13:35:18.351383] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:18.997 [2024-11-20 13:35:18.351426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:18.997 pt2 00:15:18.997 13:35:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.997 13:35:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:18.997 13:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:18.997 13:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:18.997 13:35:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.997 13:35:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.997 [2024-11-20 13:35:18.362445] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:18.997 [2024-11-20 13:35:18.362515] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.997 [2024-11-20 13:35:18.362537] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:18.997 [2024-11-20 13:35:18.362555] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.997 [2024-11-20 13:35:18.363026] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.997 [2024-11-20 13:35:18.363054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:18.997 [2024-11-20 13:35:18.363165] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:18.997 [2024-11-20 13:35:18.363195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:18.997 [2024-11-20 13:35:18.363364] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:18.997 [2024-11-20 13:35:18.363394] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:18.997 [2024-11-20 13:35:18.363665] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:18.997 [2024-11-20 13:35:18.363833] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:15:18.997 [2024-11-20 13:35:18.363844] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:18.997 [2024-11-20 13:35:18.364001] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.997 pt3 00:15:18.997 13:35:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.997 13:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:18.997 13:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:18.997 13:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:18.997 13:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:18.997 13:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:18.997 13:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:18.997 13:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:18.997 13:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:18.997 13:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.997 13:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.997 13:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.997 13:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.997 13:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.997 13:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.997 13:35:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.997 13:35:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.997 13:35:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.997 13:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.997 "name": "raid_bdev1", 00:15:18.997 "uuid": "82344784-f727-437b-82df-ae116a90ed5d", 00:15:18.997 "strip_size_kb": 0, 00:15:18.997 "state": "online", 00:15:18.997 "raid_level": "raid1", 00:15:18.997 "superblock": true, 00:15:18.997 "num_base_bdevs": 3, 00:15:18.997 "num_base_bdevs_discovered": 3, 00:15:18.997 "num_base_bdevs_operational": 3, 00:15:18.997 "base_bdevs_list": [ 00:15:18.997 { 00:15:18.997 "name": "pt1", 00:15:18.997 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:18.997 "is_configured": true, 00:15:18.997 "data_offset": 2048, 00:15:18.997 "data_size": 63488 00:15:18.997 }, 00:15:18.997 { 00:15:18.997 "name": "pt2", 00:15:18.997 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:18.997 "is_configured": true, 00:15:18.997 "data_offset": 2048, 00:15:18.997 "data_size": 63488 00:15:18.997 }, 00:15:18.997 { 00:15:18.997 "name": "pt3", 00:15:18.997 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:18.997 "is_configured": true, 00:15:18.997 "data_offset": 2048, 00:15:18.997 "data_size": 63488 00:15:18.997 } 00:15:18.997 ] 00:15:18.997 }' 00:15:18.997 13:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.997 13:35:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.564 13:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:19.564 13:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:19.564 13:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:19.564 13:35:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:19.564 13:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:19.564 13:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:19.564 13:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:19.564 13:35:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.565 13:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:19.565 13:35:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.565 [2024-11-20 13:35:18.842751] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:19.565 13:35:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.565 13:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:19.565 "name": "raid_bdev1", 00:15:19.565 "aliases": [ 00:15:19.565 "82344784-f727-437b-82df-ae116a90ed5d" 00:15:19.565 ], 00:15:19.565 "product_name": "Raid Volume", 00:15:19.565 "block_size": 512, 00:15:19.565 "num_blocks": 63488, 00:15:19.565 "uuid": "82344784-f727-437b-82df-ae116a90ed5d", 00:15:19.565 "assigned_rate_limits": { 00:15:19.565 "rw_ios_per_sec": 0, 00:15:19.565 "rw_mbytes_per_sec": 0, 00:15:19.565 "r_mbytes_per_sec": 0, 00:15:19.565 "w_mbytes_per_sec": 0 00:15:19.565 }, 00:15:19.565 "claimed": false, 00:15:19.565 "zoned": false, 00:15:19.565 "supported_io_types": { 00:15:19.565 "read": true, 00:15:19.565 "write": true, 00:15:19.565 "unmap": false, 00:15:19.565 "flush": false, 00:15:19.565 "reset": true, 00:15:19.565 "nvme_admin": false, 00:15:19.565 "nvme_io": false, 00:15:19.565 "nvme_io_md": false, 00:15:19.565 "write_zeroes": true, 00:15:19.565 "zcopy": false, 00:15:19.565 "get_zone_info": false, 00:15:19.565 
"zone_management": false, 00:15:19.565 "zone_append": false, 00:15:19.565 "compare": false, 00:15:19.565 "compare_and_write": false, 00:15:19.565 "abort": false, 00:15:19.565 "seek_hole": false, 00:15:19.565 "seek_data": false, 00:15:19.565 "copy": false, 00:15:19.565 "nvme_iov_md": false 00:15:19.565 }, 00:15:19.565 "memory_domains": [ 00:15:19.565 { 00:15:19.565 "dma_device_id": "system", 00:15:19.565 "dma_device_type": 1 00:15:19.565 }, 00:15:19.565 { 00:15:19.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.565 "dma_device_type": 2 00:15:19.565 }, 00:15:19.565 { 00:15:19.565 "dma_device_id": "system", 00:15:19.565 "dma_device_type": 1 00:15:19.565 }, 00:15:19.565 { 00:15:19.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.565 "dma_device_type": 2 00:15:19.565 }, 00:15:19.565 { 00:15:19.565 "dma_device_id": "system", 00:15:19.565 "dma_device_type": 1 00:15:19.565 }, 00:15:19.565 { 00:15:19.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.565 "dma_device_type": 2 00:15:19.565 } 00:15:19.565 ], 00:15:19.565 "driver_specific": { 00:15:19.565 "raid": { 00:15:19.565 "uuid": "82344784-f727-437b-82df-ae116a90ed5d", 00:15:19.565 "strip_size_kb": 0, 00:15:19.565 "state": "online", 00:15:19.565 "raid_level": "raid1", 00:15:19.565 "superblock": true, 00:15:19.565 "num_base_bdevs": 3, 00:15:19.565 "num_base_bdevs_discovered": 3, 00:15:19.565 "num_base_bdevs_operational": 3, 00:15:19.565 "base_bdevs_list": [ 00:15:19.565 { 00:15:19.565 "name": "pt1", 00:15:19.565 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:19.565 "is_configured": true, 00:15:19.565 "data_offset": 2048, 00:15:19.565 "data_size": 63488 00:15:19.565 }, 00:15:19.565 { 00:15:19.565 "name": "pt2", 00:15:19.565 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:19.565 "is_configured": true, 00:15:19.565 "data_offset": 2048, 00:15:19.565 "data_size": 63488 00:15:19.565 }, 00:15:19.565 { 00:15:19.565 "name": "pt3", 00:15:19.565 "uuid": "00000000-0000-0000-0000-000000000003", 
00:15:19.565 "is_configured": true, 00:15:19.565 "data_offset": 2048, 00:15:19.565 "data_size": 63488 00:15:19.565 } 00:15:19.565 ] 00:15:19.565 } 00:15:19.565 } 00:15:19.565 }' 00:15:19.565 13:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:19.565 13:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:19.565 pt2 00:15:19.565 pt3' 00:15:19.565 13:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:19.565 13:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:19.565 13:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:19.565 13:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:19.565 13:35:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.565 13:35:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.565 13:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:19.565 13:35:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.565 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:19.565 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:19.565 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:19.565 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:19.565 13:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.565 
13:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.565 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:19.565 13:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.824 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:19.824 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:19.824 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:19.824 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:19.824 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:19.824 13:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.824 13:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.824 13:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.824 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:19.824 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:19.824 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:19.824 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:19.824 13:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.824 13:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.824 [2024-11-20 13:35:19.102727] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:15:19.824 13:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.824 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 82344784-f727-437b-82df-ae116a90ed5d '!=' 82344784-f727-437b-82df-ae116a90ed5d ']' 00:15:19.824 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:19.824 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:19.824 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:19.824 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:19.824 13:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.824 13:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.824 [2024-11-20 13:35:19.146488] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:19.824 13:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.824 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:19.824 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.824 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.824 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:19.824 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:19.824 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:19.824 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.824 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:19.824 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.824 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.824 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.824 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.824 13:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.824 13:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.824 13:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.824 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.824 "name": "raid_bdev1", 00:15:19.824 "uuid": "82344784-f727-437b-82df-ae116a90ed5d", 00:15:19.824 "strip_size_kb": 0, 00:15:19.824 "state": "online", 00:15:19.824 "raid_level": "raid1", 00:15:19.824 "superblock": true, 00:15:19.824 "num_base_bdevs": 3, 00:15:19.824 "num_base_bdevs_discovered": 2, 00:15:19.824 "num_base_bdevs_operational": 2, 00:15:19.824 "base_bdevs_list": [ 00:15:19.824 { 00:15:19.824 "name": null, 00:15:19.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.824 "is_configured": false, 00:15:19.824 "data_offset": 0, 00:15:19.824 "data_size": 63488 00:15:19.824 }, 00:15:19.824 { 00:15:19.824 "name": "pt2", 00:15:19.824 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:19.824 "is_configured": true, 00:15:19.824 "data_offset": 2048, 00:15:19.825 "data_size": 63488 00:15:19.825 }, 00:15:19.825 { 00:15:19.825 "name": "pt3", 00:15:19.825 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:19.825 "is_configured": true, 00:15:19.825 "data_offset": 2048, 00:15:19.825 "data_size": 63488 00:15:19.825 } 00:15:19.825 ] 00:15:19.825 }' 00:15:19.825 13:35:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.825 13:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.392 [2024-11-20 13:35:19.602446] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:20.392 [2024-11-20 13:35:19.602481] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:20.392 [2024-11-20 13:35:19.602565] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:20.392 [2024-11-20 13:35:19.602627] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:20.392 [2024-11-20 13:35:19.602647] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:20.392 
13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:20.392 [2024-11-20 13:35:19.682351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:20.392 [2024-11-20 13:35:19.682417] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.392 [2024-11-20 13:35:19.682439] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:20.392 [2024-11-20 13:35:19.682455] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.392 [2024-11-20 13:35:19.685101] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.392 [2024-11-20 13:35:19.685153] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:20.392 [2024-11-20 13:35:19.685243] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:20.392 [2024-11-20 13:35:19.685300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:20.392 pt2 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.392 "name": "raid_bdev1", 00:15:20.392 "uuid": "82344784-f727-437b-82df-ae116a90ed5d", 00:15:20.392 "strip_size_kb": 0, 00:15:20.392 "state": "configuring", 00:15:20.392 "raid_level": "raid1", 00:15:20.392 "superblock": true, 00:15:20.392 "num_base_bdevs": 3, 00:15:20.392 "num_base_bdevs_discovered": 1, 00:15:20.392 "num_base_bdevs_operational": 2, 00:15:20.392 "base_bdevs_list": [ 00:15:20.392 { 00:15:20.392 "name": null, 00:15:20.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.392 "is_configured": false, 00:15:20.392 "data_offset": 2048, 00:15:20.392 "data_size": 63488 00:15:20.392 }, 00:15:20.392 { 00:15:20.392 "name": "pt2", 00:15:20.392 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:20.392 "is_configured": true, 00:15:20.392 "data_offset": 2048, 00:15:20.392 "data_size": 63488 00:15:20.392 }, 00:15:20.392 { 00:15:20.392 "name": null, 00:15:20.392 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:20.392 "is_configured": false, 00:15:20.392 "data_offset": 2048, 00:15:20.392 "data_size": 63488 00:15:20.392 } 00:15:20.392 ] 00:15:20.392 }' 
00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.392 13:35:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.651 13:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:20.651 13:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:20.651 13:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:15:20.651 13:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:20.651 13:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.651 13:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.651 [2024-11-20 13:35:20.102070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:20.651 [2024-11-20 13:35:20.102143] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.651 [2024-11-20 13:35:20.102168] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:20.651 [2024-11-20 13:35:20.102184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.651 [2024-11-20 13:35:20.102675] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.651 [2024-11-20 13:35:20.102701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:20.651 [2024-11-20 13:35:20.102798] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:20.651 [2024-11-20 13:35:20.102830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:20.651 [2024-11-20 13:35:20.102963] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:20.651 [2024-11-20 13:35:20.102978] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:20.651 [2024-11-20 13:35:20.103279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:20.651 [2024-11-20 13:35:20.103447] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:20.651 [2024-11-20 13:35:20.103459] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:20.651 [2024-11-20 13:35:20.103618] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.651 pt3 00:15:20.651 13:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.651 13:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:20.651 13:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.651 13:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.651 13:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:20.651 13:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:20.651 13:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:20.651 13:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.651 13:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.651 13:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.651 13:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.651 13:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.651 13:35:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.651 13:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.651 13:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.908 13:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.908 13:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.908 "name": "raid_bdev1", 00:15:20.908 "uuid": "82344784-f727-437b-82df-ae116a90ed5d", 00:15:20.908 "strip_size_kb": 0, 00:15:20.908 "state": "online", 00:15:20.908 "raid_level": "raid1", 00:15:20.908 "superblock": true, 00:15:20.908 "num_base_bdevs": 3, 00:15:20.908 "num_base_bdevs_discovered": 2, 00:15:20.908 "num_base_bdevs_operational": 2, 00:15:20.908 "base_bdevs_list": [ 00:15:20.908 { 00:15:20.908 "name": null, 00:15:20.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.908 "is_configured": false, 00:15:20.908 "data_offset": 2048, 00:15:20.908 "data_size": 63488 00:15:20.908 }, 00:15:20.908 { 00:15:20.908 "name": "pt2", 00:15:20.908 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:20.908 "is_configured": true, 00:15:20.908 "data_offset": 2048, 00:15:20.908 "data_size": 63488 00:15:20.908 }, 00:15:20.908 { 00:15:20.908 "name": "pt3", 00:15:20.908 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:20.908 "is_configured": true, 00:15:20.908 "data_offset": 2048, 00:15:20.908 "data_size": 63488 00:15:20.908 } 00:15:20.908 ] 00:15:20.908 }' 00:15:20.908 13:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.908 13:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.201 13:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:21.201 13:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.201 
13:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.201 [2024-11-20 13:35:20.553389] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:21.201 [2024-11-20 13:35:20.553427] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:21.201 [2024-11-20 13:35:20.553514] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:21.201 [2024-11-20 13:35:20.553583] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:21.201 [2024-11-20 13:35:20.553596] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:21.201 13:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.201 13:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:21.201 13:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.201 13:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.201 13:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.201 13:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.201 13:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:21.201 13:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:21.201 13:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:15:21.201 13:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:15:21.201 13:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:15:21.201 13:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.201 13:35:20 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.201 13:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.201 13:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:21.201 13:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.201 13:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.201 [2024-11-20 13:35:20.621306] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:21.201 [2024-11-20 13:35:20.621377] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:21.201 [2024-11-20 13:35:20.621403] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:21.201 [2024-11-20 13:35:20.621416] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:21.201 [2024-11-20 13:35:20.623917] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:21.201 [2024-11-20 13:35:20.624107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:21.201 [2024-11-20 13:35:20.624227] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:21.201 [2024-11-20 13:35:20.624281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:21.201 [2024-11-20 13:35:20.624434] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:21.201 [2024-11-20 13:35:20.624447] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:21.201 [2024-11-20 13:35:20.624468] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:21.201 [2024-11-20 
13:35:20.624533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:21.201 pt1 00:15:21.201 13:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.201 13:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:15:21.201 13:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:21.201 13:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.201 13:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:21.201 13:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:21.201 13:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:21.202 13:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:21.202 13:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.202 13:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.202 13:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.202 13:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.202 13:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.202 13:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.202 13:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.202 13:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.202 13:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.202 13:35:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.202 "name": "raid_bdev1", 00:15:21.202 "uuid": "82344784-f727-437b-82df-ae116a90ed5d", 00:15:21.202 "strip_size_kb": 0, 00:15:21.202 "state": "configuring", 00:15:21.202 "raid_level": "raid1", 00:15:21.202 "superblock": true, 00:15:21.202 "num_base_bdevs": 3, 00:15:21.202 "num_base_bdevs_discovered": 1, 00:15:21.202 "num_base_bdevs_operational": 2, 00:15:21.202 "base_bdevs_list": [ 00:15:21.202 { 00:15:21.202 "name": null, 00:15:21.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.202 "is_configured": false, 00:15:21.202 "data_offset": 2048, 00:15:21.202 "data_size": 63488 00:15:21.202 }, 00:15:21.202 { 00:15:21.202 "name": "pt2", 00:15:21.202 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:21.202 "is_configured": true, 00:15:21.202 "data_offset": 2048, 00:15:21.202 "data_size": 63488 00:15:21.202 }, 00:15:21.202 { 00:15:21.202 "name": null, 00:15:21.202 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:21.202 "is_configured": false, 00:15:21.202 "data_offset": 2048, 00:15:21.202 "data_size": 63488 00:15:21.202 } 00:15:21.202 ] 00:15:21.202 }' 00:15:21.202 13:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.202 13:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.768 13:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:21.768 13:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:21.768 13:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.768 13:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.768 13:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.768 13:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 
-- # [[ false == \f\a\l\s\e ]] 00:15:21.768 13:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:21.768 13:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.768 13:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.768 [2024-11-20 13:35:21.096770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:21.768 [2024-11-20 13:35:21.096851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:21.768 [2024-11-20 13:35:21.096879] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:21.768 [2024-11-20 13:35:21.096893] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:21.768 [2024-11-20 13:35:21.097405] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:21.768 [2024-11-20 13:35:21.097436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:21.769 [2024-11-20 13:35:21.097531] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:21.769 [2024-11-20 13:35:21.097556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:21.769 [2024-11-20 13:35:21.097683] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:21.769 [2024-11-20 13:35:21.097694] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:21.769 [2024-11-20 13:35:21.097958] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:21.769 [2024-11-20 13:35:21.098126] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:21.769 [2024-11-20 13:35:21.098145] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name raid_bdev1, raid_bdev 0x617000008900 00:15:21.769 [2024-11-20 13:35:21.098292] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.769 pt3 00:15:21.769 13:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.769 13:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:21.769 13:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.769 13:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.769 13:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:21.769 13:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:21.769 13:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:21.769 13:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.769 13:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.769 13:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.769 13:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.769 13:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.769 13:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.769 13:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.769 13:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.769 13:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.769 13:35:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.769 "name": "raid_bdev1", 00:15:21.769 "uuid": "82344784-f727-437b-82df-ae116a90ed5d", 00:15:21.769 "strip_size_kb": 0, 00:15:21.769 "state": "online", 00:15:21.769 "raid_level": "raid1", 00:15:21.769 "superblock": true, 00:15:21.769 "num_base_bdevs": 3, 00:15:21.769 "num_base_bdevs_discovered": 2, 00:15:21.769 "num_base_bdevs_operational": 2, 00:15:21.769 "base_bdevs_list": [ 00:15:21.769 { 00:15:21.769 "name": null, 00:15:21.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.769 "is_configured": false, 00:15:21.769 "data_offset": 2048, 00:15:21.769 "data_size": 63488 00:15:21.769 }, 00:15:21.769 { 00:15:21.769 "name": "pt2", 00:15:21.769 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:21.769 "is_configured": true, 00:15:21.769 "data_offset": 2048, 00:15:21.769 "data_size": 63488 00:15:21.769 }, 00:15:21.769 { 00:15:21.769 "name": "pt3", 00:15:21.769 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:21.769 "is_configured": true, 00:15:21.769 "data_offset": 2048, 00:15:21.769 "data_size": 63488 00:15:21.769 } 00:15:21.769 ] 00:15:21.769 }' 00:15:21.769 13:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.769 13:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.337 13:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:22.338 13:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:22.338 13:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.338 13:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.338 13:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.338 13:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:22.338 
13:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:22.338 13:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:22.338 13:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.338 13:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.338 [2024-11-20 13:35:21.580472] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:22.338 13:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.338 13:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 82344784-f727-437b-82df-ae116a90ed5d '!=' 82344784-f727-437b-82df-ae116a90ed5d ']' 00:15:22.338 13:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68404 00:15:22.338 13:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68404 ']' 00:15:22.338 13:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68404 00:15:22.338 13:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:22.338 13:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:22.338 13:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68404 00:15:22.338 killing process with pid 68404 00:15:22.338 13:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:22.338 13:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:22.338 13:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68404' 00:15:22.338 13:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 68404 00:15:22.338 [2024-11-20 
13:35:21.661105] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:22.338 [2024-11-20 13:35:21.661219] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:22.338 13:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68404 00:15:22.338 [2024-11-20 13:35:21.661283] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:22.338 [2024-11-20 13:35:21.661301] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:22.596 [2024-11-20 13:35:21.966386] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:23.973 ************************************ 00:15:23.973 END TEST raid_superblock_test 00:15:23.973 ************************************ 00:15:23.973 13:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:23.973 00:15:23.973 real 0m7.820s 00:15:23.973 user 0m12.202s 00:15:23.973 sys 0m1.599s 00:15:23.973 13:35:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:23.973 13:35:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.973 13:35:23 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:15:23.973 13:35:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:23.973 13:35:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:23.973 13:35:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:23.973 ************************************ 00:15:23.973 START TEST raid_read_error_test 00:15:23.973 ************************************ 00:15:23.973 13:35:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:15:23.973 13:35:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 
00:15:23.973 13:35:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:15:23.973 13:35:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:15:23.973 13:35:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:23.973 13:35:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:23.973 13:35:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:23.973 13:35:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:23.973 13:35:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:23.973 13:35:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:23.973 13:35:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:23.973 13:35:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:23.973 13:35:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:23.973 13:35:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:23.973 13:35:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:23.973 13:35:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:23.973 13:35:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:23.973 13:35:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:23.973 13:35:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:23.974 13:35:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:23.974 13:35:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:23.974 13:35:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:23.974 13:35:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:15:23.974 13:35:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:15:23.974 13:35:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:23.974 13:35:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.yveVml6RHe 00:15:23.974 13:35:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=68844 00:15:23.974 13:35:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 68844 00:15:23.974 13:35:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 68844 ']' 00:15:23.974 13:35:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:23.974 13:35:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.974 13:35:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:23.974 13:35:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:23.974 13:35:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:23.974 13:35:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.974 [2024-11-20 13:35:23.326598] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:15:23.974 [2024-11-20 13:35:23.326820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68844 ] 00:15:24.232 [2024-11-20 13:35:23.525049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.232 [2024-11-20 13:35:23.677554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.491 [2024-11-20 13:35:23.909707] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:24.491 [2024-11-20 13:35:23.909784] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:24.749 13:35:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:24.749 13:35:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:24.749 13:35:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:24.749 13:35:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:24.749 13:35:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.749 13:35:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.749 BaseBdev1_malloc 00:15:24.749 13:35:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.749 13:35:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:24.749 13:35:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.749 13:35:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.749 true 00:15:24.749 13:35:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:24.749 13:35:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:24.749 13:35:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.749 13:35:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.749 [2024-11-20 13:35:24.204355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:24.749 [2024-11-20 13:35:24.204423] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:24.749 [2024-11-20 13:35:24.204449] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:24.749 [2024-11-20 13:35:24.204466] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:24.749 [2024-11-20 13:35:24.206879] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:24.750 [2024-11-20 13:35:24.207083] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:24.750 BaseBdev1 00:15:24.750 13:35:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.750 13:35:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:24.750 13:35:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:24.750 13:35:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.750 13:35:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.008 BaseBdev2_malloc 00:15:25.008 13:35:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.009 13:35:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:25.009 13:35:24 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.009 13:35:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.009 true 00:15:25.009 13:35:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.009 13:35:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:25.009 13:35:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.009 13:35:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.009 [2024-11-20 13:35:24.272703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:25.009 [2024-11-20 13:35:24.272766] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.009 [2024-11-20 13:35:24.272788] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:25.009 [2024-11-20 13:35:24.272805] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.009 [2024-11-20 13:35:24.275212] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.009 [2024-11-20 13:35:24.275261] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:25.009 BaseBdev2 00:15:25.009 13:35:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.009 13:35:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:25.009 13:35:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:25.009 13:35:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.009 13:35:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.009 BaseBdev3_malloc 00:15:25.009 13:35:24 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.009 13:35:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:25.009 13:35:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.009 13:35:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.009 true 00:15:25.009 13:35:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.009 13:35:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:25.009 13:35:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.009 13:35:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.009 [2024-11-20 13:35:24.350342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:25.009 [2024-11-20 13:35:24.350399] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.009 [2024-11-20 13:35:24.350421] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:25.009 [2024-11-20 13:35:24.350437] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.009 [2024-11-20 13:35:24.352817] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.009 [2024-11-20 13:35:24.352992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:25.009 BaseBdev3 00:15:25.009 13:35:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.009 13:35:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:15:25.009 13:35:24 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.009 13:35:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.009 [2024-11-20 13:35:24.362404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:25.009 [2024-11-20 13:35:24.364462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:25.009 [2024-11-20 13:35:24.364540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:25.009 [2024-11-20 13:35:24.364750] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:25.009 [2024-11-20 13:35:24.364765] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:25.009 [2024-11-20 13:35:24.365026] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:15:25.009 [2024-11-20 13:35:24.365220] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:25.009 [2024-11-20 13:35:24.365236] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:25.009 [2024-11-20 13:35:24.365390] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:25.009 13:35:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.009 13:35:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:25.009 13:35:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:25.009 13:35:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:25.009 13:35:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:25.009 13:35:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:25.009 13:35:24 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:25.009 13:35:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.009 13:35:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.009 13:35:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.009 13:35:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.009 13:35:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.009 13:35:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.009 13:35:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.009 13:35:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.009 13:35:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.009 13:35:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.009 "name": "raid_bdev1", 00:15:25.009 "uuid": "bfbce4e5-9faa-4574-8d9b-a314f4d3a2c1", 00:15:25.009 "strip_size_kb": 0, 00:15:25.009 "state": "online", 00:15:25.009 "raid_level": "raid1", 00:15:25.009 "superblock": true, 00:15:25.009 "num_base_bdevs": 3, 00:15:25.009 "num_base_bdevs_discovered": 3, 00:15:25.009 "num_base_bdevs_operational": 3, 00:15:25.009 "base_bdevs_list": [ 00:15:25.009 { 00:15:25.009 "name": "BaseBdev1", 00:15:25.009 "uuid": "05582902-264e-5cf4-a704-60b893b6e444", 00:15:25.009 "is_configured": true, 00:15:25.009 "data_offset": 2048, 00:15:25.009 "data_size": 63488 00:15:25.009 }, 00:15:25.009 { 00:15:25.009 "name": "BaseBdev2", 00:15:25.009 "uuid": "cd40da7a-b338-50c9-a568-8bedaf576e24", 00:15:25.009 "is_configured": true, 00:15:25.009 "data_offset": 2048, 00:15:25.009 "data_size": 63488 
00:15:25.009 }, 00:15:25.009 { 00:15:25.009 "name": "BaseBdev3", 00:15:25.009 "uuid": "33118588-3475-5f92-b034-96cbbe5891d8", 00:15:25.009 "is_configured": true, 00:15:25.009 "data_offset": 2048, 00:15:25.009 "data_size": 63488 00:15:25.009 } 00:15:25.009 ] 00:15:25.009 }' 00:15:25.009 13:35:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.009 13:35:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.575 13:35:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:25.575 13:35:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:25.575 [2024-11-20 13:35:24.862925] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:15:26.512 13:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:26.512 13:35:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.512 13:35:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.512 13:35:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.512 13:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:26.512 13:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:15:26.512 13:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:15:26.512 13:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:15:26.512 13:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:26.512 13:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:26.512 
13:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.512 13:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:26.512 13:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:26.512 13:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:26.512 13:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.512 13:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.512 13:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.512 13:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.512 13:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.512 13:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.512 13:35:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.512 13:35:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.512 13:35:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.512 13:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.512 "name": "raid_bdev1", 00:15:26.512 "uuid": "bfbce4e5-9faa-4574-8d9b-a314f4d3a2c1", 00:15:26.512 "strip_size_kb": 0, 00:15:26.512 "state": "online", 00:15:26.512 "raid_level": "raid1", 00:15:26.512 "superblock": true, 00:15:26.512 "num_base_bdevs": 3, 00:15:26.512 "num_base_bdevs_discovered": 3, 00:15:26.512 "num_base_bdevs_operational": 3, 00:15:26.512 "base_bdevs_list": [ 00:15:26.512 { 00:15:26.512 "name": "BaseBdev1", 00:15:26.512 "uuid": "05582902-264e-5cf4-a704-60b893b6e444", 
00:15:26.512 "is_configured": true, 00:15:26.512 "data_offset": 2048, 00:15:26.512 "data_size": 63488 00:15:26.512 }, 00:15:26.512 { 00:15:26.512 "name": "BaseBdev2", 00:15:26.513 "uuid": "cd40da7a-b338-50c9-a568-8bedaf576e24", 00:15:26.513 "is_configured": true, 00:15:26.513 "data_offset": 2048, 00:15:26.513 "data_size": 63488 00:15:26.513 }, 00:15:26.513 { 00:15:26.513 "name": "BaseBdev3", 00:15:26.513 "uuid": "33118588-3475-5f92-b034-96cbbe5891d8", 00:15:26.513 "is_configured": true, 00:15:26.513 "data_offset": 2048, 00:15:26.513 "data_size": 63488 00:15:26.513 } 00:15:26.513 ] 00:15:26.513 }' 00:15:26.513 13:35:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.513 13:35:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.772 13:35:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:26.772 13:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.772 13:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.030 [2024-11-20 13:35:26.259692] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:27.030 [2024-11-20 13:35:26.259734] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:27.030 [2024-11-20 13:35:26.262706] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:27.030 [2024-11-20 13:35:26.262769] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.030 [2024-11-20 13:35:26.262883] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:27.031 [2024-11-20 13:35:26.262897] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:27.031 13:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:15:27.031 { 00:15:27.031 "results": [ 00:15:27.031 { 00:15:27.031 "job": "raid_bdev1", 00:15:27.031 "core_mask": "0x1", 00:15:27.031 "workload": "randrw", 00:15:27.031 "percentage": 50, 00:15:27.031 "status": "finished", 00:15:27.031 "queue_depth": 1, 00:15:27.031 "io_size": 131072, 00:15:27.031 "runtime": 1.39705, 00:15:27.031 "iops": 13024.587523710676, 00:15:27.031 "mibps": 1628.0734404638345, 00:15:27.031 "io_failed": 0, 00:15:27.031 "io_timeout": 0, 00:15:27.031 "avg_latency_us": 73.72868356256417, 00:15:27.031 "min_latency_us": 25.39437751004016, 00:15:27.031 "max_latency_us": 1473.9020080321286 00:15:27.031 } 00:15:27.031 ], 00:15:27.031 "core_count": 1 00:15:27.031 } 00:15:27.031 13:35:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 68844 00:15:27.031 13:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 68844 ']' 00:15:27.031 13:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 68844 00:15:27.031 13:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:15:27.031 13:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:27.031 13:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68844 00:15:27.031 killing process with pid 68844 00:15:27.031 13:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:27.031 13:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:27.031 13:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68844' 00:15:27.031 13:35:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 68844 00:15:27.031 [2024-11-20 13:35:26.308842] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:27.031 13:35:26 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 68844 00:15:27.290 [2024-11-20 13:35:26.557762] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:28.668 13:35:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:28.668 13:35:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:28.668 13:35:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.yveVml6RHe 00:15:28.668 13:35:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:15:28.668 13:35:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:15:28.668 13:35:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:28.668 13:35:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:28.668 13:35:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:15:28.668 00:15:28.668 real 0m4.661s 00:15:28.668 user 0m5.455s 00:15:28.668 sys 0m0.612s 00:15:28.668 13:35:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:28.668 13:35:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.668 ************************************ 00:15:28.668 END TEST raid_read_error_test 00:15:28.668 ************************************ 00:15:28.668 13:35:27 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:15:28.668 13:35:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:28.668 13:35:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:28.668 13:35:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:28.668 ************************************ 00:15:28.668 START TEST raid_write_error_test 00:15:28.668 ************************************ 00:15:28.668 13:35:27 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:15:28.668 13:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:15:28.668 13:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:15:28.668 13:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:15:28.668 13:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:28.668 13:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:28.668 13:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:28.668 13:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:28.668 13:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:28.668 13:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:28.668 13:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:28.668 13:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:28.668 13:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:28.668 13:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:28.668 13:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:28.668 13:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:28.668 13:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:28.668 13:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:28.668 13:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:15:28.668 13:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:28.668 13:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:28.668 13:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:28.668 13:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:15:28.668 13:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:15:28.668 13:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:28.668 13:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.5802Brsvk7 00:15:28.668 13:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=68995 00:15:28.668 13:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 68995 00:15:28.668 13:35:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:28.668 13:35:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 68995 ']' 00:15:28.668 13:35:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.668 13:35:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:28.668 13:35:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:28.668 13:35:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:28.668 13:35:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.668 [2024-11-20 13:35:28.050806] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:15:28.668 [2024-11-20 13:35:28.051151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68995 ] 00:15:28.927 [2024-11-20 13:35:28.237780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.927 [2024-11-20 13:35:28.367484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.185 [2024-11-20 13:35:28.589695] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:29.185 [2024-11-20 13:35:28.590017] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:29.754 13:35:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:29.754 13:35:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:29.754 13:35:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:29.754 13:35:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:29.754 13:35:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.754 13:35:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.754 BaseBdev1_malloc 00:15:29.754 13:35:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.754 13:35:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:15:29.754 13:35:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.754 13:35:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.754 true 00:15:29.754 13:35:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.754 13:35:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:29.754 13:35:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.754 13:35:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.754 [2024-11-20 13:35:29.091647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:29.754 [2024-11-20 13:35:29.091876] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.754 [2024-11-20 13:35:29.091921] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:29.754 [2024-11-20 13:35:29.091940] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.754 [2024-11-20 13:35:29.094675] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.754 [2024-11-20 13:35:29.094740] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:29.754 BaseBdev1 00:15:29.754 13:35:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.754 13:35:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:29.754 13:35:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:29.754 13:35:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.754 13:35:29 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:29.754 BaseBdev2_malloc 00:15:29.754 13:35:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.754 13:35:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:29.754 13:35:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.754 13:35:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.754 true 00:15:29.754 13:35:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.754 13:35:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:29.754 13:35:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.754 13:35:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.754 [2024-11-20 13:35:29.162621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:29.754 [2024-11-20 13:35:29.162690] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.754 [2024-11-20 13:35:29.162713] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:29.754 [2024-11-20 13:35:29.162729] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.754 [2024-11-20 13:35:29.165514] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.754 [2024-11-20 13:35:29.165563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:29.754 BaseBdev2 00:15:29.754 13:35:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.754 13:35:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:29.754 13:35:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:29.754 13:35:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.754 13:35:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.754 BaseBdev3_malloc 00:15:29.754 13:35:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.754 13:35:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:29.754 13:35:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.754 13:35:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.754 true 00:15:30.012 13:35:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.012 13:35:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:30.012 13:35:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.012 13:35:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.012 [2024-11-20 13:35:29.244591] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:30.012 [2024-11-20 13:35:29.244790] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:30.012 [2024-11-20 13:35:29.244826] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:30.012 [2024-11-20 13:35:29.244842] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:30.012 [2024-11-20 13:35:29.247641] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:30.012 [2024-11-20 13:35:29.247697] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:15:30.012 BaseBdev3 00:15:30.012 13:35:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.012 13:35:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:15:30.012 13:35:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.012 13:35:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.012 [2024-11-20 13:35:29.256716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:30.012 [2024-11-20 13:35:29.258991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:30.012 [2024-11-20 13:35:29.259112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:30.012 [2024-11-20 13:35:29.259334] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:30.012 [2024-11-20 13:35:29.259348] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:30.012 [2024-11-20 13:35:29.259648] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:15:30.012 [2024-11-20 13:35:29.259829] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:30.012 [2024-11-20 13:35:29.259842] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:30.012 [2024-11-20 13:35:29.260026] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:30.012 13:35:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.012 13:35:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:30.012 13:35:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:15:30.012 13:35:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.012 13:35:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:30.012 13:35:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:30.012 13:35:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:30.012 13:35:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.012 13:35:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.012 13:35:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.012 13:35:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.012 13:35:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.012 13:35:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.012 13:35:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.012 13:35:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.012 13:35:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.012 13:35:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.012 "name": "raid_bdev1", 00:15:30.012 "uuid": "ce239021-9ae4-47df-91cb-8a5800b7e453", 00:15:30.012 "strip_size_kb": 0, 00:15:30.012 "state": "online", 00:15:30.012 "raid_level": "raid1", 00:15:30.012 "superblock": true, 00:15:30.012 "num_base_bdevs": 3, 00:15:30.012 "num_base_bdevs_discovered": 3, 00:15:30.012 "num_base_bdevs_operational": 3, 00:15:30.012 "base_bdevs_list": [ 00:15:30.012 { 00:15:30.012 "name": "BaseBdev1", 00:15:30.012 
"uuid": "0bde4354-e742-51cd-84dc-5978aeb38786", 00:15:30.012 "is_configured": true, 00:15:30.012 "data_offset": 2048, 00:15:30.012 "data_size": 63488 00:15:30.012 }, 00:15:30.012 { 00:15:30.012 "name": "BaseBdev2", 00:15:30.012 "uuid": "4f46196a-a1c3-5813-85bd-a10fa9eb1012", 00:15:30.012 "is_configured": true, 00:15:30.012 "data_offset": 2048, 00:15:30.012 "data_size": 63488 00:15:30.012 }, 00:15:30.012 { 00:15:30.012 "name": "BaseBdev3", 00:15:30.012 "uuid": "cf74fc83-4e44-57d8-8747-7da65c63874f", 00:15:30.012 "is_configured": true, 00:15:30.012 "data_offset": 2048, 00:15:30.012 "data_size": 63488 00:15:30.012 } 00:15:30.012 ] 00:15:30.012 }' 00:15:30.012 13:35:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.012 13:35:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.270 13:35:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:30.270 13:35:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:30.530 [2024-11-20 13:35:29.849203] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:15:31.463 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:31.463 13:35:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.463 13:35:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.463 [2024-11-20 13:35:30.761490] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:15:31.463 [2024-11-20 13:35:30.761553] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:31.463 [2024-11-20 13:35:30.761777] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:15:31.463 13:35:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.463 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:31.463 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:15:31.463 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:15:31.463 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:15:31.463 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:31.463 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:31.463 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:31.463 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:31.463 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:31.463 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:31.463 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.463 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.463 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.463 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.463 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.463 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.463 13:35:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:31.463 13:35:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.463 13:35:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.463 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.463 "name": "raid_bdev1", 00:15:31.463 "uuid": "ce239021-9ae4-47df-91cb-8a5800b7e453", 00:15:31.463 "strip_size_kb": 0, 00:15:31.463 "state": "online", 00:15:31.463 "raid_level": "raid1", 00:15:31.463 "superblock": true, 00:15:31.463 "num_base_bdevs": 3, 00:15:31.463 "num_base_bdevs_discovered": 2, 00:15:31.463 "num_base_bdevs_operational": 2, 00:15:31.463 "base_bdevs_list": [ 00:15:31.463 { 00:15:31.463 "name": null, 00:15:31.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.463 "is_configured": false, 00:15:31.463 "data_offset": 0, 00:15:31.463 "data_size": 63488 00:15:31.463 }, 00:15:31.463 { 00:15:31.463 "name": "BaseBdev2", 00:15:31.463 "uuid": "4f46196a-a1c3-5813-85bd-a10fa9eb1012", 00:15:31.463 "is_configured": true, 00:15:31.463 "data_offset": 2048, 00:15:31.463 "data_size": 63488 00:15:31.463 }, 00:15:31.463 { 00:15:31.463 "name": "BaseBdev3", 00:15:31.463 "uuid": "cf74fc83-4e44-57d8-8747-7da65c63874f", 00:15:31.463 "is_configured": true, 00:15:31.463 "data_offset": 2048, 00:15:31.463 "data_size": 63488 00:15:31.463 } 00:15:31.463 ] 00:15:31.463 }' 00:15:31.463 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.463 13:35:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.721 13:35:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:31.721 13:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.721 13:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.721 [2024-11-20 13:35:31.188564] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:31.721 [2024-11-20 13:35:31.188610] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:31.721 [2024-11-20 13:35:31.191550] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:31.721 [2024-11-20 13:35:31.191785] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.721 [2024-11-20 13:35:31.191911] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:31.721 [2024-11-20 13:35:31.191935] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:31.721 { 00:15:31.721 "results": [ 00:15:31.721 { 00:15:31.721 "job": "raid_bdev1", 00:15:31.722 "core_mask": "0x1", 00:15:31.722 "workload": "randrw", 00:15:31.722 "percentage": 50, 00:15:31.722 "status": "finished", 00:15:31.722 "queue_depth": 1, 00:15:31.722 "io_size": 131072, 00:15:31.722 "runtime": 1.339354, 00:15:31.722 "iops": 13749.165642541106, 00:15:31.722 "mibps": 1718.6457053176382, 00:15:31.722 "io_failed": 0, 00:15:31.722 "io_timeout": 0, 00:15:31.722 "avg_latency_us": 69.73313522348967, 00:15:31.722 "min_latency_us": 24.571887550200803, 00:15:31.722 "max_latency_us": 1539.701204819277 00:15:31.722 } 00:15:31.722 ], 00:15:31.722 "core_count": 1 00:15:31.722 } 00:15:31.722 13:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.722 13:35:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 68995 00:15:31.722 13:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 68995 ']' 00:15:31.722 13:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 68995 00:15:31.722 13:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:15:31.722 13:35:31 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:31.980 13:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68995 00:15:31.980 killing process with pid 68995 00:15:31.980 13:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:31.980 13:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:31.980 13:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68995' 00:15:31.980 13:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 68995 00:15:31.980 [2024-11-20 13:35:31.243982] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:31.980 13:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 68995 00:15:32.237 [2024-11-20 13:35:31.493502] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:33.612 13:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.5802Brsvk7 00:15:33.612 13:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:33.612 13:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:33.612 13:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:15:33.612 13:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:15:33.612 13:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:33.612 13:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:33.612 13:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:15:33.612 00:15:33.612 real 0m4.806s 00:15:33.612 user 0m5.760s 00:15:33.612 sys 0m0.658s 00:15:33.612 
************************************ 00:15:33.612 END TEST raid_write_error_test 00:15:33.612 ************************************ 00:15:33.612 13:35:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:33.612 13:35:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.612 13:35:32 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:15:33.612 13:35:32 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:15:33.612 13:35:32 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:15:33.612 13:35:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:33.612 13:35:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:33.612 13:35:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:33.612 ************************************ 00:15:33.612 START TEST raid_state_function_test 00:15:33.612 ************************************ 00:15:33.612 13:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:15:33.612 13:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:15:33.612 13:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:33.612 13:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:33.613 13:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:33.613 13:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:33.613 13:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:33.613 13:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:33.613 13:35:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:33.613 13:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:33.613 13:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:33.613 13:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:33.613 13:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:33.613 13:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:33.613 13:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:33.613 13:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:33.613 13:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:33.613 13:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:33.613 13:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:33.613 13:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:33.613 13:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:33.613 13:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:33.613 13:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:33.613 13:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:33.613 13:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:33.613 13:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:15:33.613 13:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:15:33.613 13:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:33.613 13:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:33.613 13:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:33.613 Process raid pid: 69139 00:15:33.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.613 13:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69139 00:15:33.613 13:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:33.613 13:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69139' 00:15:33.613 13:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69139 00:15:33.613 13:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69139 ']' 00:15:33.613 13:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.613 13:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:33.613 13:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.613 13:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:33.613 13:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.613 [2024-11-20 13:35:32.911443] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:15:33.613 [2024-11-20 13:35:32.911759] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.613 [2024-11-20 13:35:33.086184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.871 [2024-11-20 13:35:33.242431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.129 [2024-11-20 13:35:33.472642] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:34.129 [2024-11-20 13:35:33.472690] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:34.388 13:35:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:34.388 13:35:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:34.388 13:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:34.388 13:35:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.388 13:35:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.388 [2024-11-20 13:35:33.780333] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:34.388 [2024-11-20 13:35:33.780397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:34.388 [2024-11-20 13:35:33.780410] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:34.388 [2024-11-20 13:35:33.780423] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:34.388 [2024-11-20 13:35:33.780431] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:34.388 [2024-11-20 13:35:33.780444] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:34.388 [2024-11-20 13:35:33.780452] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:34.388 [2024-11-20 13:35:33.780465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:34.388 13:35:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.388 13:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:34.388 13:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:34.388 13:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:34.388 13:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:34.388 13:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.388 13:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:34.388 13:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.388 13:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.388 13:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.388 13:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.388 13:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.388 13:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.388 13:35:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.388 13:35:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.388 13:35:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.388 13:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.388 "name": "Existed_Raid", 00:15:34.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.388 "strip_size_kb": 64, 00:15:34.388 "state": "configuring", 00:15:34.388 "raid_level": "raid0", 00:15:34.388 "superblock": false, 00:15:34.388 "num_base_bdevs": 4, 00:15:34.388 "num_base_bdevs_discovered": 0, 00:15:34.388 "num_base_bdevs_operational": 4, 00:15:34.388 "base_bdevs_list": [ 00:15:34.388 { 00:15:34.388 "name": "BaseBdev1", 00:15:34.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.388 "is_configured": false, 00:15:34.388 "data_offset": 0, 00:15:34.388 "data_size": 0 00:15:34.388 }, 00:15:34.388 { 00:15:34.388 "name": "BaseBdev2", 00:15:34.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.388 "is_configured": false, 00:15:34.388 "data_offset": 0, 00:15:34.388 "data_size": 0 00:15:34.388 }, 00:15:34.388 { 00:15:34.388 "name": "BaseBdev3", 00:15:34.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.388 "is_configured": false, 00:15:34.388 "data_offset": 0, 00:15:34.388 "data_size": 0 00:15:34.388 }, 00:15:34.388 { 00:15:34.388 "name": "BaseBdev4", 00:15:34.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.388 "is_configured": false, 00:15:34.388 "data_offset": 0, 00:15:34.388 "data_size": 0 00:15:34.388 } 00:15:34.388 ] 00:15:34.388 }' 00:15:34.388 13:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.388 13:35:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.956 [2024-11-20 13:35:34.227623] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:34.956 [2024-11-20 13:35:34.227801] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.956 [2024-11-20 13:35:34.239612] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:34.956 [2024-11-20 13:35:34.239665] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:34.956 [2024-11-20 13:35:34.239677] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:34.956 [2024-11-20 13:35:34.239690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:34.956 [2024-11-20 13:35:34.239699] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:34.956 [2024-11-20 13:35:34.239712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:34.956 [2024-11-20 13:35:34.239720] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:34.956 [2024-11-20 13:35:34.239733] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.956 [2024-11-20 13:35:34.292425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:34.956 BaseBdev1 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.956 [ 00:15:34.956 { 00:15:34.956 "name": "BaseBdev1", 00:15:34.956 "aliases": [ 00:15:34.956 "f500dc54-a512-4d65-8e99-abfacb91e25f" 00:15:34.956 ], 00:15:34.956 "product_name": "Malloc disk", 00:15:34.956 "block_size": 512, 00:15:34.956 "num_blocks": 65536, 00:15:34.956 "uuid": "f500dc54-a512-4d65-8e99-abfacb91e25f", 00:15:34.956 "assigned_rate_limits": { 00:15:34.956 "rw_ios_per_sec": 0, 00:15:34.956 "rw_mbytes_per_sec": 0, 00:15:34.956 "r_mbytes_per_sec": 0, 00:15:34.956 "w_mbytes_per_sec": 0 00:15:34.956 }, 00:15:34.956 "claimed": true, 00:15:34.956 "claim_type": "exclusive_write", 00:15:34.956 "zoned": false, 00:15:34.956 "supported_io_types": { 00:15:34.956 "read": true, 00:15:34.956 "write": true, 00:15:34.956 "unmap": true, 00:15:34.956 "flush": true, 00:15:34.956 "reset": true, 00:15:34.956 "nvme_admin": false, 00:15:34.956 "nvme_io": false, 00:15:34.956 "nvme_io_md": false, 00:15:34.956 "write_zeroes": true, 00:15:34.956 "zcopy": true, 00:15:34.956 "get_zone_info": false, 00:15:34.956 "zone_management": false, 00:15:34.956 "zone_append": false, 00:15:34.956 "compare": false, 00:15:34.956 "compare_and_write": false, 00:15:34.956 "abort": true, 00:15:34.956 "seek_hole": false, 00:15:34.956 "seek_data": false, 00:15:34.956 "copy": true, 00:15:34.956 "nvme_iov_md": false 00:15:34.956 }, 00:15:34.956 "memory_domains": [ 00:15:34.956 { 00:15:34.956 "dma_device_id": "system", 00:15:34.956 "dma_device_type": 1 00:15:34.956 }, 00:15:34.956 { 00:15:34.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.956 "dma_device_type": 2 00:15:34.956 } 00:15:34.956 ], 00:15:34.956 "driver_specific": {} 00:15:34.956 } 00:15:34.956 ] 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.956 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.956 "name": "Existed_Raid", 
00:15:34.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.956 "strip_size_kb": 64, 00:15:34.956 "state": "configuring", 00:15:34.956 "raid_level": "raid0", 00:15:34.956 "superblock": false, 00:15:34.956 "num_base_bdevs": 4, 00:15:34.956 "num_base_bdevs_discovered": 1, 00:15:34.956 "num_base_bdevs_operational": 4, 00:15:34.956 "base_bdevs_list": [ 00:15:34.956 { 00:15:34.956 "name": "BaseBdev1", 00:15:34.956 "uuid": "f500dc54-a512-4d65-8e99-abfacb91e25f", 00:15:34.956 "is_configured": true, 00:15:34.956 "data_offset": 0, 00:15:34.956 "data_size": 65536 00:15:34.956 }, 00:15:34.956 { 00:15:34.956 "name": "BaseBdev2", 00:15:34.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.956 "is_configured": false, 00:15:34.956 "data_offset": 0, 00:15:34.956 "data_size": 0 00:15:34.956 }, 00:15:34.956 { 00:15:34.956 "name": "BaseBdev3", 00:15:34.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.956 "is_configured": false, 00:15:34.956 "data_offset": 0, 00:15:34.957 "data_size": 0 00:15:34.957 }, 00:15:34.957 { 00:15:34.957 "name": "BaseBdev4", 00:15:34.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.957 "is_configured": false, 00:15:34.957 "data_offset": 0, 00:15:34.957 "data_size": 0 00:15:34.957 } 00:15:34.957 ] 00:15:34.957 }' 00:15:34.957 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.957 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.525 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:35.525 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.525 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.525 [2024-11-20 13:35:34.807755] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:35.525 [2024-11-20 13:35:34.807815] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:35.525 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.525 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:35.525 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.525 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.525 [2024-11-20 13:35:34.815811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:35.525 [2024-11-20 13:35:34.818213] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:35.525 [2024-11-20 13:35:34.818263] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:35.525 [2024-11-20 13:35:34.818275] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:35.525 [2024-11-20 13:35:34.818291] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:35.525 [2024-11-20 13:35:34.818313] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:35.525 [2024-11-20 13:35:34.818326] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:35.525 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.525 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:35.525 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:35.525 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:15:35.525 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:35.525 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.525 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:35.525 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.525 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:35.525 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.525 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.525 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.525 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.525 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.525 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.525 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.525 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.525 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.525 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.525 "name": "Existed_Raid", 00:15:35.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.525 "strip_size_kb": 64, 00:15:35.525 "state": "configuring", 00:15:35.525 "raid_level": "raid0", 00:15:35.525 "superblock": false, 00:15:35.525 "num_base_bdevs": 4, 00:15:35.525 
"num_base_bdevs_discovered": 1, 00:15:35.525 "num_base_bdevs_operational": 4, 00:15:35.525 "base_bdevs_list": [ 00:15:35.525 { 00:15:35.525 "name": "BaseBdev1", 00:15:35.525 "uuid": "f500dc54-a512-4d65-8e99-abfacb91e25f", 00:15:35.525 "is_configured": true, 00:15:35.525 "data_offset": 0, 00:15:35.525 "data_size": 65536 00:15:35.525 }, 00:15:35.525 { 00:15:35.525 "name": "BaseBdev2", 00:15:35.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.525 "is_configured": false, 00:15:35.525 "data_offset": 0, 00:15:35.525 "data_size": 0 00:15:35.525 }, 00:15:35.525 { 00:15:35.525 "name": "BaseBdev3", 00:15:35.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.525 "is_configured": false, 00:15:35.525 "data_offset": 0, 00:15:35.525 "data_size": 0 00:15:35.525 }, 00:15:35.525 { 00:15:35.525 "name": "BaseBdev4", 00:15:35.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.525 "is_configured": false, 00:15:35.525 "data_offset": 0, 00:15:35.525 "data_size": 0 00:15:35.525 } 00:15:35.525 ] 00:15:35.525 }' 00:15:35.525 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.525 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.094 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:36.094 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.094 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.094 [2024-11-20 13:35:35.317790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:36.094 BaseBdev2 00:15:36.094 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.094 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:36.094 13:35:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:36.094 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:36.094 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:36.094 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:36.094 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:36.094 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:36.094 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.094 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.094 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.094 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:36.094 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.094 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.094 [ 00:15:36.094 { 00:15:36.094 "name": "BaseBdev2", 00:15:36.094 "aliases": [ 00:15:36.094 "46608338-c908-4816-a030-8023a6eb5b6d" 00:15:36.094 ], 00:15:36.094 "product_name": "Malloc disk", 00:15:36.094 "block_size": 512, 00:15:36.094 "num_blocks": 65536, 00:15:36.094 "uuid": "46608338-c908-4816-a030-8023a6eb5b6d", 00:15:36.094 "assigned_rate_limits": { 00:15:36.094 "rw_ios_per_sec": 0, 00:15:36.094 "rw_mbytes_per_sec": 0, 00:15:36.094 "r_mbytes_per_sec": 0, 00:15:36.094 "w_mbytes_per_sec": 0 00:15:36.094 }, 00:15:36.094 "claimed": true, 00:15:36.094 "claim_type": "exclusive_write", 00:15:36.094 "zoned": false, 00:15:36.094 "supported_io_types": { 
00:15:36.094 "read": true, 00:15:36.094 "write": true, 00:15:36.094 "unmap": true, 00:15:36.094 "flush": true, 00:15:36.094 "reset": true, 00:15:36.094 "nvme_admin": false, 00:15:36.094 "nvme_io": false, 00:15:36.094 "nvme_io_md": false, 00:15:36.094 "write_zeroes": true, 00:15:36.094 "zcopy": true, 00:15:36.094 "get_zone_info": false, 00:15:36.094 "zone_management": false, 00:15:36.094 "zone_append": false, 00:15:36.094 "compare": false, 00:15:36.094 "compare_and_write": false, 00:15:36.094 "abort": true, 00:15:36.094 "seek_hole": false, 00:15:36.094 "seek_data": false, 00:15:36.094 "copy": true, 00:15:36.094 "nvme_iov_md": false 00:15:36.094 }, 00:15:36.094 "memory_domains": [ 00:15:36.094 { 00:15:36.094 "dma_device_id": "system", 00:15:36.094 "dma_device_type": 1 00:15:36.094 }, 00:15:36.094 { 00:15:36.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.094 "dma_device_type": 2 00:15:36.094 } 00:15:36.094 ], 00:15:36.094 "driver_specific": {} 00:15:36.094 } 00:15:36.094 ] 00:15:36.094 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.094 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:36.094 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:36.094 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:36.094 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:36.094 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:36.094 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:36.094 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:36.094 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:15:36.094 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:36.094 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.094 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.094 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.094 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.094 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.094 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.094 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.094 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.094 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.094 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.094 "name": "Existed_Raid", 00:15:36.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.094 "strip_size_kb": 64, 00:15:36.094 "state": "configuring", 00:15:36.094 "raid_level": "raid0", 00:15:36.094 "superblock": false, 00:15:36.094 "num_base_bdevs": 4, 00:15:36.094 "num_base_bdevs_discovered": 2, 00:15:36.094 "num_base_bdevs_operational": 4, 00:15:36.094 "base_bdevs_list": [ 00:15:36.094 { 00:15:36.094 "name": "BaseBdev1", 00:15:36.094 "uuid": "f500dc54-a512-4d65-8e99-abfacb91e25f", 00:15:36.094 "is_configured": true, 00:15:36.094 "data_offset": 0, 00:15:36.094 "data_size": 65536 00:15:36.094 }, 00:15:36.094 { 00:15:36.094 "name": "BaseBdev2", 00:15:36.094 "uuid": "46608338-c908-4816-a030-8023a6eb5b6d", 00:15:36.094 
"is_configured": true, 00:15:36.094 "data_offset": 0, 00:15:36.094 "data_size": 65536 00:15:36.094 }, 00:15:36.094 { 00:15:36.094 "name": "BaseBdev3", 00:15:36.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.094 "is_configured": false, 00:15:36.094 "data_offset": 0, 00:15:36.094 "data_size": 0 00:15:36.094 }, 00:15:36.094 { 00:15:36.094 "name": "BaseBdev4", 00:15:36.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.094 "is_configured": false, 00:15:36.094 "data_offset": 0, 00:15:36.094 "data_size": 0 00:15:36.094 } 00:15:36.094 ] 00:15:36.094 }' 00:15:36.094 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.094 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.663 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:36.663 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.663 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.663 [2024-11-20 13:35:35.894391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:36.663 BaseBdev3 00:15:36.663 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.663 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:36.663 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:36.663 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:36.663 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:36.663 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:36.663 13:35:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:36.663 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:36.663 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.663 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.663 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.663 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:36.663 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.663 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.663 [ 00:15:36.663 { 00:15:36.663 "name": "BaseBdev3", 00:15:36.663 "aliases": [ 00:15:36.663 "2521162f-c379-4756-a735-c70d0ffca697" 00:15:36.663 ], 00:15:36.663 "product_name": "Malloc disk", 00:15:36.663 "block_size": 512, 00:15:36.663 "num_blocks": 65536, 00:15:36.663 "uuid": "2521162f-c379-4756-a735-c70d0ffca697", 00:15:36.663 "assigned_rate_limits": { 00:15:36.663 "rw_ios_per_sec": 0, 00:15:36.663 "rw_mbytes_per_sec": 0, 00:15:36.663 "r_mbytes_per_sec": 0, 00:15:36.663 "w_mbytes_per_sec": 0 00:15:36.663 }, 00:15:36.663 "claimed": true, 00:15:36.663 "claim_type": "exclusive_write", 00:15:36.663 "zoned": false, 00:15:36.663 "supported_io_types": { 00:15:36.663 "read": true, 00:15:36.663 "write": true, 00:15:36.663 "unmap": true, 00:15:36.663 "flush": true, 00:15:36.663 "reset": true, 00:15:36.663 "nvme_admin": false, 00:15:36.663 "nvme_io": false, 00:15:36.664 "nvme_io_md": false, 00:15:36.664 "write_zeroes": true, 00:15:36.664 "zcopy": true, 00:15:36.664 "get_zone_info": false, 00:15:36.664 "zone_management": false, 00:15:36.664 "zone_append": false, 00:15:36.664 "compare": false, 00:15:36.664 "compare_and_write": false, 
00:15:36.664 "abort": true, 00:15:36.664 "seek_hole": false, 00:15:36.664 "seek_data": false, 00:15:36.664 "copy": true, 00:15:36.664 "nvme_iov_md": false 00:15:36.664 }, 00:15:36.664 "memory_domains": [ 00:15:36.664 { 00:15:36.664 "dma_device_id": "system", 00:15:36.664 "dma_device_type": 1 00:15:36.664 }, 00:15:36.664 { 00:15:36.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.664 "dma_device_type": 2 00:15:36.664 } 00:15:36.664 ], 00:15:36.664 "driver_specific": {} 00:15:36.664 } 00:15:36.664 ] 00:15:36.664 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.664 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:36.664 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:36.664 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:36.664 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:36.664 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:36.664 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:36.664 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:36.664 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.664 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:36.664 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.664 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.664 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:36.664 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.664 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.664 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.664 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.664 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.664 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.664 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.664 "name": "Existed_Raid", 00:15:36.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.664 "strip_size_kb": 64, 00:15:36.664 "state": "configuring", 00:15:36.664 "raid_level": "raid0", 00:15:36.664 "superblock": false, 00:15:36.664 "num_base_bdevs": 4, 00:15:36.664 "num_base_bdevs_discovered": 3, 00:15:36.664 "num_base_bdevs_operational": 4, 00:15:36.664 "base_bdevs_list": [ 00:15:36.664 { 00:15:36.664 "name": "BaseBdev1", 00:15:36.664 "uuid": "f500dc54-a512-4d65-8e99-abfacb91e25f", 00:15:36.664 "is_configured": true, 00:15:36.664 "data_offset": 0, 00:15:36.664 "data_size": 65536 00:15:36.664 }, 00:15:36.664 { 00:15:36.664 "name": "BaseBdev2", 00:15:36.664 "uuid": "46608338-c908-4816-a030-8023a6eb5b6d", 00:15:36.664 "is_configured": true, 00:15:36.664 "data_offset": 0, 00:15:36.664 "data_size": 65536 00:15:36.664 }, 00:15:36.664 { 00:15:36.664 "name": "BaseBdev3", 00:15:36.664 "uuid": "2521162f-c379-4756-a735-c70d0ffca697", 00:15:36.664 "is_configured": true, 00:15:36.664 "data_offset": 0, 00:15:36.664 "data_size": 65536 00:15:36.664 }, 00:15:36.664 { 00:15:36.664 "name": "BaseBdev4", 00:15:36.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.664 "is_configured": false, 
00:15:36.664 "data_offset": 0, 00:15:36.664 "data_size": 0 00:15:36.664 } 00:15:36.664 ] 00:15:36.664 }' 00:15:36.664 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.664 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.243 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:37.243 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.243 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.243 [2024-11-20 13:35:36.464984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:37.243 [2024-11-20 13:35:36.465037] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:37.243 [2024-11-20 13:35:36.465049] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:15:37.243 [2024-11-20 13:35:36.465395] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:37.243 [2024-11-20 13:35:36.465602] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:37.243 [2024-11-20 13:35:36.465625] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:37.243 [2024-11-20 13:35:36.465904] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:37.243 BaseBdev4 00:15:37.243 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.243 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:37.243 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:37.243 13:35:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:37.243 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:37.243 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:37.243 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:37.243 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:37.244 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.244 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.244 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.244 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:37.244 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.244 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.244 [ 00:15:37.244 { 00:15:37.244 "name": "BaseBdev4", 00:15:37.244 "aliases": [ 00:15:37.244 "71623dfb-3940-4965-8dd8-852a0eacbc1f" 00:15:37.244 ], 00:15:37.244 "product_name": "Malloc disk", 00:15:37.244 "block_size": 512, 00:15:37.244 "num_blocks": 65536, 00:15:37.244 "uuid": "71623dfb-3940-4965-8dd8-852a0eacbc1f", 00:15:37.244 "assigned_rate_limits": { 00:15:37.244 "rw_ios_per_sec": 0, 00:15:37.244 "rw_mbytes_per_sec": 0, 00:15:37.244 "r_mbytes_per_sec": 0, 00:15:37.244 "w_mbytes_per_sec": 0 00:15:37.244 }, 00:15:37.244 "claimed": true, 00:15:37.244 "claim_type": "exclusive_write", 00:15:37.244 "zoned": false, 00:15:37.244 "supported_io_types": { 00:15:37.244 "read": true, 00:15:37.244 "write": true, 00:15:37.244 "unmap": true, 00:15:37.244 "flush": true, 00:15:37.244 "reset": true, 00:15:37.244 
"nvme_admin": false, 00:15:37.244 "nvme_io": false, 00:15:37.244 "nvme_io_md": false, 00:15:37.244 "write_zeroes": true, 00:15:37.244 "zcopy": true, 00:15:37.244 "get_zone_info": false, 00:15:37.244 "zone_management": false, 00:15:37.244 "zone_append": false, 00:15:37.244 "compare": false, 00:15:37.244 "compare_and_write": false, 00:15:37.244 "abort": true, 00:15:37.244 "seek_hole": false, 00:15:37.244 "seek_data": false, 00:15:37.244 "copy": true, 00:15:37.244 "nvme_iov_md": false 00:15:37.244 }, 00:15:37.244 "memory_domains": [ 00:15:37.244 { 00:15:37.244 "dma_device_id": "system", 00:15:37.244 "dma_device_type": 1 00:15:37.244 }, 00:15:37.244 { 00:15:37.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.244 "dma_device_type": 2 00:15:37.244 } 00:15:37.244 ], 00:15:37.244 "driver_specific": {} 00:15:37.244 } 00:15:37.244 ] 00:15:37.244 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.244 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:37.244 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:37.244 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:37.244 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:15:37.244 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:37.244 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.244 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:37.244 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.244 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:37.244 13:35:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.244 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.244 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.244 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.244 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.244 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.244 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.244 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.244 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.244 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.244 "name": "Existed_Raid", 00:15:37.244 "uuid": "8087689b-03c1-4f63-b4c5-b17f23dfec24", 00:15:37.244 "strip_size_kb": 64, 00:15:37.244 "state": "online", 00:15:37.244 "raid_level": "raid0", 00:15:37.244 "superblock": false, 00:15:37.244 "num_base_bdevs": 4, 00:15:37.244 "num_base_bdevs_discovered": 4, 00:15:37.244 "num_base_bdevs_operational": 4, 00:15:37.244 "base_bdevs_list": [ 00:15:37.244 { 00:15:37.244 "name": "BaseBdev1", 00:15:37.244 "uuid": "f500dc54-a512-4d65-8e99-abfacb91e25f", 00:15:37.244 "is_configured": true, 00:15:37.244 "data_offset": 0, 00:15:37.244 "data_size": 65536 00:15:37.244 }, 00:15:37.244 { 00:15:37.244 "name": "BaseBdev2", 00:15:37.244 "uuid": "46608338-c908-4816-a030-8023a6eb5b6d", 00:15:37.244 "is_configured": true, 00:15:37.244 "data_offset": 0, 00:15:37.244 "data_size": 65536 00:15:37.244 }, 00:15:37.244 { 00:15:37.244 "name": "BaseBdev3", 00:15:37.244 "uuid": 
"2521162f-c379-4756-a735-c70d0ffca697", 00:15:37.244 "is_configured": true, 00:15:37.244 "data_offset": 0, 00:15:37.244 "data_size": 65536 00:15:37.244 }, 00:15:37.244 { 00:15:37.244 "name": "BaseBdev4", 00:15:37.244 "uuid": "71623dfb-3940-4965-8dd8-852a0eacbc1f", 00:15:37.244 "is_configured": true, 00:15:37.244 "data_offset": 0, 00:15:37.244 "data_size": 65536 00:15:37.244 } 00:15:37.244 ] 00:15:37.244 }' 00:15:37.244 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.244 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.503 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:37.503 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:37.503 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:37.503 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:37.503 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:37.503 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:37.503 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:37.503 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.503 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:37.504 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.504 [2024-11-20 13:35:36.948772] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:37.504 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.763 13:35:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:37.763 "name": "Existed_Raid", 00:15:37.763 "aliases": [ 00:15:37.763 "8087689b-03c1-4f63-b4c5-b17f23dfec24" 00:15:37.763 ], 00:15:37.763 "product_name": "Raid Volume", 00:15:37.763 "block_size": 512, 00:15:37.763 "num_blocks": 262144, 00:15:37.763 "uuid": "8087689b-03c1-4f63-b4c5-b17f23dfec24", 00:15:37.763 "assigned_rate_limits": { 00:15:37.763 "rw_ios_per_sec": 0, 00:15:37.763 "rw_mbytes_per_sec": 0, 00:15:37.763 "r_mbytes_per_sec": 0, 00:15:37.763 "w_mbytes_per_sec": 0 00:15:37.763 }, 00:15:37.763 "claimed": false, 00:15:37.763 "zoned": false, 00:15:37.763 "supported_io_types": { 00:15:37.763 "read": true, 00:15:37.763 "write": true, 00:15:37.763 "unmap": true, 00:15:37.763 "flush": true, 00:15:37.763 "reset": true, 00:15:37.763 "nvme_admin": false, 00:15:37.763 "nvme_io": false, 00:15:37.763 "nvme_io_md": false, 00:15:37.763 "write_zeroes": true, 00:15:37.763 "zcopy": false, 00:15:37.763 "get_zone_info": false, 00:15:37.763 "zone_management": false, 00:15:37.763 "zone_append": false, 00:15:37.763 "compare": false, 00:15:37.763 "compare_and_write": false, 00:15:37.763 "abort": false, 00:15:37.763 "seek_hole": false, 00:15:37.763 "seek_data": false, 00:15:37.763 "copy": false, 00:15:37.763 "nvme_iov_md": false 00:15:37.763 }, 00:15:37.763 "memory_domains": [ 00:15:37.763 { 00:15:37.763 "dma_device_id": "system", 00:15:37.763 "dma_device_type": 1 00:15:37.763 }, 00:15:37.763 { 00:15:37.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.763 "dma_device_type": 2 00:15:37.763 }, 00:15:37.763 { 00:15:37.763 "dma_device_id": "system", 00:15:37.763 "dma_device_type": 1 00:15:37.763 }, 00:15:37.763 { 00:15:37.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.763 "dma_device_type": 2 00:15:37.763 }, 00:15:37.763 { 00:15:37.763 "dma_device_id": "system", 00:15:37.763 "dma_device_type": 1 00:15:37.763 }, 00:15:37.763 { 00:15:37.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:15:37.763 "dma_device_type": 2 00:15:37.763 }, 00:15:37.763 { 00:15:37.763 "dma_device_id": "system", 00:15:37.763 "dma_device_type": 1 00:15:37.763 }, 00:15:37.763 { 00:15:37.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.763 "dma_device_type": 2 00:15:37.763 } 00:15:37.763 ], 00:15:37.763 "driver_specific": { 00:15:37.763 "raid": { 00:15:37.763 "uuid": "8087689b-03c1-4f63-b4c5-b17f23dfec24", 00:15:37.763 "strip_size_kb": 64, 00:15:37.763 "state": "online", 00:15:37.763 "raid_level": "raid0", 00:15:37.763 "superblock": false, 00:15:37.763 "num_base_bdevs": 4, 00:15:37.763 "num_base_bdevs_discovered": 4, 00:15:37.763 "num_base_bdevs_operational": 4, 00:15:37.763 "base_bdevs_list": [ 00:15:37.763 { 00:15:37.763 "name": "BaseBdev1", 00:15:37.763 "uuid": "f500dc54-a512-4d65-8e99-abfacb91e25f", 00:15:37.763 "is_configured": true, 00:15:37.763 "data_offset": 0, 00:15:37.763 "data_size": 65536 00:15:37.763 }, 00:15:37.763 { 00:15:37.763 "name": "BaseBdev2", 00:15:37.763 "uuid": "46608338-c908-4816-a030-8023a6eb5b6d", 00:15:37.763 "is_configured": true, 00:15:37.763 "data_offset": 0, 00:15:37.763 "data_size": 65536 00:15:37.763 }, 00:15:37.763 { 00:15:37.763 "name": "BaseBdev3", 00:15:37.763 "uuid": "2521162f-c379-4756-a735-c70d0ffca697", 00:15:37.763 "is_configured": true, 00:15:37.763 "data_offset": 0, 00:15:37.763 "data_size": 65536 00:15:37.763 }, 00:15:37.763 { 00:15:37.763 "name": "BaseBdev4", 00:15:37.763 "uuid": "71623dfb-3940-4965-8dd8-852a0eacbc1f", 00:15:37.763 "is_configured": true, 00:15:37.763 "data_offset": 0, 00:15:37.763 "data_size": 65536 00:15:37.763 } 00:15:37.763 ] 00:15:37.763 } 00:15:37.763 } 00:15:37.763 }' 00:15:37.763 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:37.763 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:37.763 BaseBdev2 00:15:37.763 BaseBdev3 
00:15:37.763 BaseBdev4' 00:15:37.763 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.763 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:37.763 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:37.763 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:37.763 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.763 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.763 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.763 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.763 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:37.763 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:37.763 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:37.763 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:37.763 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.763 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.764 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.764 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.764 13:35:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:37.764 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:37.764 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:37.764 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:37.764 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.764 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.764 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.764 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.764 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:37.764 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:37.764 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:38.027 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:38.027 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.027 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.027 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:38.027 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.027 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:38.027 13:35:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:38.027 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:38.027 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.027 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.027 [2024-11-20 13:35:37.303988] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:38.027 [2024-11-20 13:35:37.304027] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:38.027 [2024-11-20 13:35:37.304100] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:38.027 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.027 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:38.027 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:15:38.027 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:38.027 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:38.027 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:15:38.027 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:15:38.027 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.027 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:15:38.027 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:38.027 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:15:38.027 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:38.027 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.027 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.027 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.027 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.027 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.027 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.027 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.027 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.027 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.027 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.027 "name": "Existed_Raid", 00:15:38.027 "uuid": "8087689b-03c1-4f63-b4c5-b17f23dfec24", 00:15:38.027 "strip_size_kb": 64, 00:15:38.027 "state": "offline", 00:15:38.027 "raid_level": "raid0", 00:15:38.027 "superblock": false, 00:15:38.027 "num_base_bdevs": 4, 00:15:38.027 "num_base_bdevs_discovered": 3, 00:15:38.027 "num_base_bdevs_operational": 3, 00:15:38.027 "base_bdevs_list": [ 00:15:38.027 { 00:15:38.027 "name": null, 00:15:38.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.027 "is_configured": false, 00:15:38.027 "data_offset": 0, 00:15:38.027 "data_size": 65536 00:15:38.027 }, 00:15:38.027 { 00:15:38.027 "name": "BaseBdev2", 00:15:38.027 "uuid": "46608338-c908-4816-a030-8023a6eb5b6d", 00:15:38.027 "is_configured": 
true, 00:15:38.027 "data_offset": 0, 00:15:38.027 "data_size": 65536 00:15:38.027 }, 00:15:38.027 { 00:15:38.027 "name": "BaseBdev3", 00:15:38.027 "uuid": "2521162f-c379-4756-a735-c70d0ffca697", 00:15:38.027 "is_configured": true, 00:15:38.027 "data_offset": 0, 00:15:38.027 "data_size": 65536 00:15:38.027 }, 00:15:38.027 { 00:15:38.027 "name": "BaseBdev4", 00:15:38.027 "uuid": "71623dfb-3940-4965-8dd8-852a0eacbc1f", 00:15:38.027 "is_configured": true, 00:15:38.027 "data_offset": 0, 00:15:38.027 "data_size": 65536 00:15:38.027 } 00:15:38.027 ] 00:15:38.027 }' 00:15:38.027 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.027 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.595 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:38.595 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:38.595 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.595 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:38.595 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.595 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.595 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.595 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:38.595 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:38.595 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:38.595 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:38.595 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.595 [2024-11-20 13:35:37.979882] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:38.853 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.853 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:38.853 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:38.853 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.853 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:38.853 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.853 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.853 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.853 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:38.853 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:38.853 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:38.853 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.853 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.853 [2024-11-20 13:35:38.140039] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:38.853 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.853 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:38.853 13:35:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:38.853 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.853 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.853 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.853 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:38.853 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.853 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:38.853 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:38.853 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:38.853 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.853 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.853 [2024-11-20 13:35:38.304937] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:38.853 [2024-11-20 13:35:38.305015] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:39.113 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.113 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:39.113 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:39.113 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.113 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:15:39.113 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.113 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.113 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.113 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:39.113 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:39.113 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:39.113 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:39.113 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:39.113 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:39.113 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.113 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.113 BaseBdev2 00:15:39.113 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.113 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:39.113 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:39.113 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:39.113 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:39.113 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:39.113 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:15:39.113 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:39.113 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.113 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.113 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.113 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:39.113 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.113 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.113 [ 00:15:39.113 { 00:15:39.113 "name": "BaseBdev2", 00:15:39.113 "aliases": [ 00:15:39.113 "85ae228a-dc80-4d55-851d-bff0d02f7ad8" 00:15:39.113 ], 00:15:39.113 "product_name": "Malloc disk", 00:15:39.113 "block_size": 512, 00:15:39.113 "num_blocks": 65536, 00:15:39.113 "uuid": "85ae228a-dc80-4d55-851d-bff0d02f7ad8", 00:15:39.113 "assigned_rate_limits": { 00:15:39.113 "rw_ios_per_sec": 0, 00:15:39.113 "rw_mbytes_per_sec": 0, 00:15:39.113 "r_mbytes_per_sec": 0, 00:15:39.113 "w_mbytes_per_sec": 0 00:15:39.113 }, 00:15:39.113 "claimed": false, 00:15:39.113 "zoned": false, 00:15:39.113 "supported_io_types": { 00:15:39.113 "read": true, 00:15:39.113 "write": true, 00:15:39.113 "unmap": true, 00:15:39.113 "flush": true, 00:15:39.113 "reset": true, 00:15:39.113 "nvme_admin": false, 00:15:39.113 "nvme_io": false, 00:15:39.113 "nvme_io_md": false, 00:15:39.113 "write_zeroes": true, 00:15:39.113 "zcopy": true, 00:15:39.113 "get_zone_info": false, 00:15:39.113 "zone_management": false, 00:15:39.113 "zone_append": false, 00:15:39.113 "compare": false, 00:15:39.113 "compare_and_write": false, 00:15:39.113 "abort": true, 00:15:39.113 "seek_hole": false, 00:15:39.113 
"seek_data": false, 00:15:39.113 "copy": true, 00:15:39.113 "nvme_iov_md": false 00:15:39.113 }, 00:15:39.113 "memory_domains": [ 00:15:39.113 { 00:15:39.113 "dma_device_id": "system", 00:15:39.113 "dma_device_type": 1 00:15:39.113 }, 00:15:39.113 { 00:15:39.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.113 "dma_device_type": 2 00:15:39.113 } 00:15:39.113 ], 00:15:39.113 "driver_specific": {} 00:15:39.113 } 00:15:39.113 ] 00:15:39.114 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.114 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:39.114 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:39.114 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:39.114 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:39.114 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.114 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.373 BaseBdev3 00:15:39.373 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.373 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:39.373 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:39.373 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:39.373 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:39.373 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:39.373 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:15:39.373 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:39.373 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.373 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.373 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.373 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:39.373 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.373 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.373 [ 00:15:39.373 { 00:15:39.373 "name": "BaseBdev3", 00:15:39.373 "aliases": [ 00:15:39.373 "f39cd5b8-b758-40a1-a745-23cdbce1d7b8" 00:15:39.373 ], 00:15:39.373 "product_name": "Malloc disk", 00:15:39.373 "block_size": 512, 00:15:39.373 "num_blocks": 65536, 00:15:39.373 "uuid": "f39cd5b8-b758-40a1-a745-23cdbce1d7b8", 00:15:39.373 "assigned_rate_limits": { 00:15:39.373 "rw_ios_per_sec": 0, 00:15:39.373 "rw_mbytes_per_sec": 0, 00:15:39.373 "r_mbytes_per_sec": 0, 00:15:39.373 "w_mbytes_per_sec": 0 00:15:39.373 }, 00:15:39.373 "claimed": false, 00:15:39.373 "zoned": false, 00:15:39.373 "supported_io_types": { 00:15:39.373 "read": true, 00:15:39.373 "write": true, 00:15:39.373 "unmap": true, 00:15:39.373 "flush": true, 00:15:39.373 "reset": true, 00:15:39.373 "nvme_admin": false, 00:15:39.373 "nvme_io": false, 00:15:39.373 "nvme_io_md": false, 00:15:39.373 "write_zeroes": true, 00:15:39.373 "zcopy": true, 00:15:39.373 "get_zone_info": false, 00:15:39.373 "zone_management": false, 00:15:39.373 "zone_append": false, 00:15:39.373 "compare": false, 00:15:39.373 "compare_and_write": false, 00:15:39.373 "abort": true, 00:15:39.373 "seek_hole": false, 00:15:39.373 "seek_data": false, 
00:15:39.373 "copy": true, 00:15:39.373 "nvme_iov_md": false 00:15:39.373 }, 00:15:39.373 "memory_domains": [ 00:15:39.373 { 00:15:39.373 "dma_device_id": "system", 00:15:39.373 "dma_device_type": 1 00:15:39.373 }, 00:15:39.373 { 00:15:39.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.373 "dma_device_type": 2 00:15:39.373 } 00:15:39.373 ], 00:15:39.373 "driver_specific": {} 00:15:39.373 } 00:15:39.373 ] 00:15:39.373 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.373 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:39.373 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:39.373 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:39.373 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:39.373 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.373 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.373 BaseBdev4 00:15:39.373 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.373 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:39.373 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:39.373 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:39.373 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:39.373 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:39.373 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:39.373 
13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:39.373 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.373 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.373 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.373 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:39.373 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.373 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.373 [ 00:15:39.373 { 00:15:39.373 "name": "BaseBdev4", 00:15:39.373 "aliases": [ 00:15:39.373 "070d2930-5c14-4baf-87d5-951a3680d1c7" 00:15:39.373 ], 00:15:39.373 "product_name": "Malloc disk", 00:15:39.373 "block_size": 512, 00:15:39.373 "num_blocks": 65536, 00:15:39.373 "uuid": "070d2930-5c14-4baf-87d5-951a3680d1c7", 00:15:39.373 "assigned_rate_limits": { 00:15:39.373 "rw_ios_per_sec": 0, 00:15:39.373 "rw_mbytes_per_sec": 0, 00:15:39.373 "r_mbytes_per_sec": 0, 00:15:39.373 "w_mbytes_per_sec": 0 00:15:39.373 }, 00:15:39.373 "claimed": false, 00:15:39.373 "zoned": false, 00:15:39.373 "supported_io_types": { 00:15:39.373 "read": true, 00:15:39.373 "write": true, 00:15:39.374 "unmap": true, 00:15:39.374 "flush": true, 00:15:39.374 "reset": true, 00:15:39.374 "nvme_admin": false, 00:15:39.374 "nvme_io": false, 00:15:39.374 "nvme_io_md": false, 00:15:39.374 "write_zeroes": true, 00:15:39.374 "zcopy": true, 00:15:39.374 "get_zone_info": false, 00:15:39.374 "zone_management": false, 00:15:39.374 "zone_append": false, 00:15:39.374 "compare": false, 00:15:39.374 "compare_and_write": false, 00:15:39.374 "abort": true, 00:15:39.374 "seek_hole": false, 00:15:39.374 "seek_data": false, 00:15:39.374 
"copy": true, 00:15:39.374 "nvme_iov_md": false 00:15:39.374 }, 00:15:39.374 "memory_domains": [ 00:15:39.374 { 00:15:39.374 "dma_device_id": "system", 00:15:39.374 "dma_device_type": 1 00:15:39.374 }, 00:15:39.374 { 00:15:39.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.374 "dma_device_type": 2 00:15:39.374 } 00:15:39.374 ], 00:15:39.374 "driver_specific": {} 00:15:39.374 } 00:15:39.374 ] 00:15:39.374 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.374 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:39.374 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:39.374 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:39.374 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:39.374 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.374 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.374 [2024-11-20 13:35:38.768968] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:39.374 [2024-11-20 13:35:38.769043] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:39.374 [2024-11-20 13:35:38.769094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:39.374 [2024-11-20 13:35:38.771440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:39.374 [2024-11-20 13:35:38.771500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:39.374 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.374 13:35:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:39.374 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.374 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.374 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:39.374 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.374 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:39.374 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.374 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.374 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.374 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.374 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.374 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.374 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.374 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.374 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.374 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.374 "name": "Existed_Raid", 00:15:39.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.374 "strip_size_kb": 64, 00:15:39.374 "state": "configuring", 00:15:39.374 
"raid_level": "raid0", 00:15:39.374 "superblock": false, 00:15:39.374 "num_base_bdevs": 4, 00:15:39.374 "num_base_bdevs_discovered": 3, 00:15:39.374 "num_base_bdevs_operational": 4, 00:15:39.374 "base_bdevs_list": [ 00:15:39.374 { 00:15:39.374 "name": "BaseBdev1", 00:15:39.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.374 "is_configured": false, 00:15:39.374 "data_offset": 0, 00:15:39.374 "data_size": 0 00:15:39.374 }, 00:15:39.374 { 00:15:39.374 "name": "BaseBdev2", 00:15:39.374 "uuid": "85ae228a-dc80-4d55-851d-bff0d02f7ad8", 00:15:39.374 "is_configured": true, 00:15:39.374 "data_offset": 0, 00:15:39.374 "data_size": 65536 00:15:39.374 }, 00:15:39.374 { 00:15:39.374 "name": "BaseBdev3", 00:15:39.374 "uuid": "f39cd5b8-b758-40a1-a745-23cdbce1d7b8", 00:15:39.374 "is_configured": true, 00:15:39.374 "data_offset": 0, 00:15:39.374 "data_size": 65536 00:15:39.374 }, 00:15:39.374 { 00:15:39.374 "name": "BaseBdev4", 00:15:39.374 "uuid": "070d2930-5c14-4baf-87d5-951a3680d1c7", 00:15:39.374 "is_configured": true, 00:15:39.374 "data_offset": 0, 00:15:39.374 "data_size": 65536 00:15:39.374 } 00:15:39.374 ] 00:15:39.374 }' 00:15:39.374 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.374 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.942 13:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:39.942 13:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.942 13:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.942 [2024-11-20 13:35:39.240330] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:39.942 13:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.942 13:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:39.942 13:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.942 13:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.942 13:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:39.942 13:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.942 13:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:39.942 13:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.942 13:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.942 13:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.942 13:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.942 13:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.942 13:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.942 13:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.942 13:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.942 13:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.942 13:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.942 "name": "Existed_Raid", 00:15:39.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.942 "strip_size_kb": 64, 00:15:39.942 "state": "configuring", 00:15:39.942 "raid_level": "raid0", 00:15:39.942 "superblock": false, 00:15:39.942 
"num_base_bdevs": 4, 00:15:39.942 "num_base_bdevs_discovered": 2, 00:15:39.942 "num_base_bdevs_operational": 4, 00:15:39.942 "base_bdevs_list": [ 00:15:39.942 { 00:15:39.942 "name": "BaseBdev1", 00:15:39.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.942 "is_configured": false, 00:15:39.942 "data_offset": 0, 00:15:39.942 "data_size": 0 00:15:39.942 }, 00:15:39.942 { 00:15:39.942 "name": null, 00:15:39.942 "uuid": "85ae228a-dc80-4d55-851d-bff0d02f7ad8", 00:15:39.942 "is_configured": false, 00:15:39.942 "data_offset": 0, 00:15:39.942 "data_size": 65536 00:15:39.942 }, 00:15:39.942 { 00:15:39.942 "name": "BaseBdev3", 00:15:39.942 "uuid": "f39cd5b8-b758-40a1-a745-23cdbce1d7b8", 00:15:39.942 "is_configured": true, 00:15:39.942 "data_offset": 0, 00:15:39.942 "data_size": 65536 00:15:39.942 }, 00:15:39.942 { 00:15:39.942 "name": "BaseBdev4", 00:15:39.942 "uuid": "070d2930-5c14-4baf-87d5-951a3680d1c7", 00:15:39.942 "is_configured": true, 00:15:39.942 "data_offset": 0, 00:15:39.942 "data_size": 65536 00:15:39.942 } 00:15:39.942 ] 00:15:39.942 }' 00:15:39.942 13:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.942 13:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.201 13:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:40.201 13:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.201 13:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.201 13:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.460 13:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.460 13:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:40.460 13:35:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:40.460 13:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.460 13:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.460 [2024-11-20 13:35:39.744591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:40.460 BaseBdev1 00:15:40.460 13:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.460 13:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:40.460 13:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:40.460 13:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:40.460 13:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:40.460 13:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:40.460 13:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:40.460 13:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:40.460 13:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.460 13:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.460 13:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.460 13:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:40.460 13:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.460 13:35:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:40.460 [ 00:15:40.460 { 00:15:40.460 "name": "BaseBdev1", 00:15:40.460 "aliases": [ 00:15:40.460 "2ef57e72-a538-4497-909e-211221c2898b" 00:15:40.460 ], 00:15:40.460 "product_name": "Malloc disk", 00:15:40.460 "block_size": 512, 00:15:40.460 "num_blocks": 65536, 00:15:40.460 "uuid": "2ef57e72-a538-4497-909e-211221c2898b", 00:15:40.460 "assigned_rate_limits": { 00:15:40.460 "rw_ios_per_sec": 0, 00:15:40.460 "rw_mbytes_per_sec": 0, 00:15:40.460 "r_mbytes_per_sec": 0, 00:15:40.460 "w_mbytes_per_sec": 0 00:15:40.460 }, 00:15:40.460 "claimed": true, 00:15:40.460 "claim_type": "exclusive_write", 00:15:40.460 "zoned": false, 00:15:40.460 "supported_io_types": { 00:15:40.460 "read": true, 00:15:40.460 "write": true, 00:15:40.460 "unmap": true, 00:15:40.460 "flush": true, 00:15:40.460 "reset": true, 00:15:40.460 "nvme_admin": false, 00:15:40.460 "nvme_io": false, 00:15:40.460 "nvme_io_md": false, 00:15:40.460 "write_zeroes": true, 00:15:40.460 "zcopy": true, 00:15:40.460 "get_zone_info": false, 00:15:40.460 "zone_management": false, 00:15:40.460 "zone_append": false, 00:15:40.460 "compare": false, 00:15:40.460 "compare_and_write": false, 00:15:40.460 "abort": true, 00:15:40.460 "seek_hole": false, 00:15:40.460 "seek_data": false, 00:15:40.460 "copy": true, 00:15:40.460 "nvme_iov_md": false 00:15:40.460 }, 00:15:40.460 "memory_domains": [ 00:15:40.460 { 00:15:40.460 "dma_device_id": "system", 00:15:40.460 "dma_device_type": 1 00:15:40.460 }, 00:15:40.460 { 00:15:40.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.460 "dma_device_type": 2 00:15:40.460 } 00:15:40.460 ], 00:15:40.460 "driver_specific": {} 00:15:40.460 } 00:15:40.460 ] 00:15:40.460 13:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.460 13:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:40.460 13:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:40.460 13:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:40.460 13:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:40.460 13:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:40.460 13:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.460 13:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:40.460 13:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.460 13:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.460 13:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.460 13:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.460 13:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.460 13:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.460 13:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.460 13:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.460 13:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.460 13:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.460 "name": "Existed_Raid", 00:15:40.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.460 "strip_size_kb": 64, 00:15:40.460 "state": "configuring", 00:15:40.460 "raid_level": "raid0", 00:15:40.460 "superblock": false, 
00:15:40.460 "num_base_bdevs": 4, 00:15:40.460 "num_base_bdevs_discovered": 3, 00:15:40.460 "num_base_bdevs_operational": 4, 00:15:40.460 "base_bdevs_list": [ 00:15:40.460 { 00:15:40.460 "name": "BaseBdev1", 00:15:40.460 "uuid": "2ef57e72-a538-4497-909e-211221c2898b", 00:15:40.460 "is_configured": true, 00:15:40.460 "data_offset": 0, 00:15:40.460 "data_size": 65536 00:15:40.460 }, 00:15:40.460 { 00:15:40.460 "name": null, 00:15:40.460 "uuid": "85ae228a-dc80-4d55-851d-bff0d02f7ad8", 00:15:40.460 "is_configured": false, 00:15:40.460 "data_offset": 0, 00:15:40.460 "data_size": 65536 00:15:40.460 }, 00:15:40.460 { 00:15:40.460 "name": "BaseBdev3", 00:15:40.461 "uuid": "f39cd5b8-b758-40a1-a745-23cdbce1d7b8", 00:15:40.461 "is_configured": true, 00:15:40.461 "data_offset": 0, 00:15:40.461 "data_size": 65536 00:15:40.461 }, 00:15:40.461 { 00:15:40.461 "name": "BaseBdev4", 00:15:40.461 "uuid": "070d2930-5c14-4baf-87d5-951a3680d1c7", 00:15:40.461 "is_configured": true, 00:15:40.461 "data_offset": 0, 00:15:40.461 "data_size": 65536 00:15:40.461 } 00:15:40.461 ] 00:15:40.461 }' 00:15:40.461 13:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.461 13:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.097 13:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:41.097 13:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.097 13:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.097 13:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.097 13:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.097 13:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:41.097 13:35:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:41.097 13:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.098 13:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.098 [2024-11-20 13:35:40.303988] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:41.098 13:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.098 13:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:41.098 13:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.098 13:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:41.098 13:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:41.098 13:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.098 13:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:41.098 13:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.098 13:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.098 13:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.098 13:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.098 13:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.098 13:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.098 13:35:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.098 13:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.098 13:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.098 13:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.098 "name": "Existed_Raid", 00:15:41.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.098 "strip_size_kb": 64, 00:15:41.098 "state": "configuring", 00:15:41.098 "raid_level": "raid0", 00:15:41.098 "superblock": false, 00:15:41.098 "num_base_bdevs": 4, 00:15:41.098 "num_base_bdevs_discovered": 2, 00:15:41.098 "num_base_bdevs_operational": 4, 00:15:41.098 "base_bdevs_list": [ 00:15:41.098 { 00:15:41.098 "name": "BaseBdev1", 00:15:41.098 "uuid": "2ef57e72-a538-4497-909e-211221c2898b", 00:15:41.098 "is_configured": true, 00:15:41.098 "data_offset": 0, 00:15:41.098 "data_size": 65536 00:15:41.098 }, 00:15:41.098 { 00:15:41.098 "name": null, 00:15:41.098 "uuid": "85ae228a-dc80-4d55-851d-bff0d02f7ad8", 00:15:41.098 "is_configured": false, 00:15:41.098 "data_offset": 0, 00:15:41.098 "data_size": 65536 00:15:41.098 }, 00:15:41.098 { 00:15:41.098 "name": null, 00:15:41.098 "uuid": "f39cd5b8-b758-40a1-a745-23cdbce1d7b8", 00:15:41.098 "is_configured": false, 00:15:41.098 "data_offset": 0, 00:15:41.098 "data_size": 65536 00:15:41.098 }, 00:15:41.098 { 00:15:41.098 "name": "BaseBdev4", 00:15:41.098 "uuid": "070d2930-5c14-4baf-87d5-951a3680d1c7", 00:15:41.098 "is_configured": true, 00:15:41.098 "data_offset": 0, 00:15:41.098 "data_size": 65536 00:15:41.098 } 00:15:41.098 ] 00:15:41.098 }' 00:15:41.098 13:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.098 13:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.357 13:35:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.357 13:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:41.357 13:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.357 13:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.357 13:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.357 13:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:41.357 13:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:41.357 13:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.357 13:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.357 [2024-11-20 13:35:40.791290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:41.357 13:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.357 13:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:41.357 13:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.357 13:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:41.357 13:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:41.357 13:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.357 13:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:41.357 13:35:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.357 13:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.357 13:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.357 13:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.357 13:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.357 13:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.357 13:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.357 13:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.357 13:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.616 13:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.616 "name": "Existed_Raid", 00:15:41.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.616 "strip_size_kb": 64, 00:15:41.616 "state": "configuring", 00:15:41.616 "raid_level": "raid0", 00:15:41.616 "superblock": false, 00:15:41.616 "num_base_bdevs": 4, 00:15:41.616 "num_base_bdevs_discovered": 3, 00:15:41.616 "num_base_bdevs_operational": 4, 00:15:41.616 "base_bdevs_list": [ 00:15:41.616 { 00:15:41.616 "name": "BaseBdev1", 00:15:41.616 "uuid": "2ef57e72-a538-4497-909e-211221c2898b", 00:15:41.616 "is_configured": true, 00:15:41.616 "data_offset": 0, 00:15:41.616 "data_size": 65536 00:15:41.616 }, 00:15:41.616 { 00:15:41.616 "name": null, 00:15:41.616 "uuid": "85ae228a-dc80-4d55-851d-bff0d02f7ad8", 00:15:41.616 "is_configured": false, 00:15:41.616 "data_offset": 0, 00:15:41.616 "data_size": 65536 00:15:41.616 }, 00:15:41.616 { 00:15:41.616 "name": "BaseBdev3", 00:15:41.616 "uuid": "f39cd5b8-b758-40a1-a745-23cdbce1d7b8", 
00:15:41.616 "is_configured": true, 00:15:41.616 "data_offset": 0, 00:15:41.616 "data_size": 65536 00:15:41.616 }, 00:15:41.616 { 00:15:41.616 "name": "BaseBdev4", 00:15:41.616 "uuid": "070d2930-5c14-4baf-87d5-951a3680d1c7", 00:15:41.616 "is_configured": true, 00:15:41.616 "data_offset": 0, 00:15:41.616 "data_size": 65536 00:15:41.616 } 00:15:41.616 ] 00:15:41.616 }' 00:15:41.616 13:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.616 13:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.874 13:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:41.874 13:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.874 13:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.874 13:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.874 13:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.874 13:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:41.874 13:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:41.874 13:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.874 13:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.874 [2024-11-20 13:35:41.254690] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:42.132 13:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.132 13:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:42.132 13:35:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:42.132 13:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:42.132 13:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:42.132 13:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.132 13:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:42.132 13:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.132 13:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.132 13:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.132 13:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.132 13:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.132 13:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.132 13:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.132 13:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.132 13:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.132 13:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.132 "name": "Existed_Raid", 00:15:42.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.132 "strip_size_kb": 64, 00:15:42.132 "state": "configuring", 00:15:42.132 "raid_level": "raid0", 00:15:42.132 "superblock": false, 00:15:42.132 "num_base_bdevs": 4, 00:15:42.132 "num_base_bdevs_discovered": 2, 00:15:42.132 
"num_base_bdevs_operational": 4, 00:15:42.132 "base_bdevs_list": [ 00:15:42.132 { 00:15:42.132 "name": null, 00:15:42.132 "uuid": "2ef57e72-a538-4497-909e-211221c2898b", 00:15:42.132 "is_configured": false, 00:15:42.132 "data_offset": 0, 00:15:42.132 "data_size": 65536 00:15:42.132 }, 00:15:42.132 { 00:15:42.132 "name": null, 00:15:42.132 "uuid": "85ae228a-dc80-4d55-851d-bff0d02f7ad8", 00:15:42.132 "is_configured": false, 00:15:42.132 "data_offset": 0, 00:15:42.132 "data_size": 65536 00:15:42.132 }, 00:15:42.132 { 00:15:42.132 "name": "BaseBdev3", 00:15:42.132 "uuid": "f39cd5b8-b758-40a1-a745-23cdbce1d7b8", 00:15:42.132 "is_configured": true, 00:15:42.132 "data_offset": 0, 00:15:42.132 "data_size": 65536 00:15:42.132 }, 00:15:42.132 { 00:15:42.132 "name": "BaseBdev4", 00:15:42.132 "uuid": "070d2930-5c14-4baf-87d5-951a3680d1c7", 00:15:42.132 "is_configured": true, 00:15:42.132 "data_offset": 0, 00:15:42.132 "data_size": 65536 00:15:42.132 } 00:15:42.132 ] 00:15:42.132 }' 00:15:42.132 13:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.132 13:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.390 13:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.390 13:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.390 13:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.390 13:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:42.390 13:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.390 13:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:42.390 13:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:15:42.390 13:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.390 13:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.390 [2024-11-20 13:35:41.838507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:42.390 13:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.390 13:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:42.390 13:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:42.390 13:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:42.390 13:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:42.390 13:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.390 13:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:42.390 13:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.390 13:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.390 13:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.390 13:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.390 13:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.390 13:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.390 13:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.390 13:35:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.649 13:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.649 13:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.649 "name": "Existed_Raid", 00:15:42.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.649 "strip_size_kb": 64, 00:15:42.649 "state": "configuring", 00:15:42.649 "raid_level": "raid0", 00:15:42.649 "superblock": false, 00:15:42.649 "num_base_bdevs": 4, 00:15:42.649 "num_base_bdevs_discovered": 3, 00:15:42.649 "num_base_bdevs_operational": 4, 00:15:42.649 "base_bdevs_list": [ 00:15:42.649 { 00:15:42.649 "name": null, 00:15:42.649 "uuid": "2ef57e72-a538-4497-909e-211221c2898b", 00:15:42.649 "is_configured": false, 00:15:42.649 "data_offset": 0, 00:15:42.649 "data_size": 65536 00:15:42.649 }, 00:15:42.649 { 00:15:42.649 "name": "BaseBdev2", 00:15:42.649 "uuid": "85ae228a-dc80-4d55-851d-bff0d02f7ad8", 00:15:42.649 "is_configured": true, 00:15:42.649 "data_offset": 0, 00:15:42.649 "data_size": 65536 00:15:42.649 }, 00:15:42.649 { 00:15:42.649 "name": "BaseBdev3", 00:15:42.649 "uuid": "f39cd5b8-b758-40a1-a745-23cdbce1d7b8", 00:15:42.649 "is_configured": true, 00:15:42.650 "data_offset": 0, 00:15:42.650 "data_size": 65536 00:15:42.650 }, 00:15:42.650 { 00:15:42.650 "name": "BaseBdev4", 00:15:42.650 "uuid": "070d2930-5c14-4baf-87d5-951a3680d1c7", 00:15:42.650 "is_configured": true, 00:15:42.650 "data_offset": 0, 00:15:42.650 "data_size": 65536 00:15:42.650 } 00:15:42.650 ] 00:15:42.650 }' 00:15:42.650 13:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.650 13:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.908 13:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.908 13:35:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:42.908 13:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.908 13:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.908 13:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.908 13:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:42.908 13:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.908 13:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:42.908 13:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.908 13:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.908 13:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.908 13:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2ef57e72-a538-4497-909e-211221c2898b 00:15:42.908 13:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.908 13:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.166 [2024-11-20 13:35:42.415311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:43.166 [2024-11-20 13:35:42.415381] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:43.166 [2024-11-20 13:35:42.415391] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:15:43.166 [2024-11-20 13:35:42.415694] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:15:43.166 [2024-11-20 13:35:42.415864] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:43.166 [2024-11-20 13:35:42.415877] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:43.166 [2024-11-20 13:35:42.416179] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.166 NewBaseBdev 00:15:43.166 13:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.166 13:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:43.166 13:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:43.166 13:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:43.166 13:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:43.166 13:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:43.166 13:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:43.166 13:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:43.166 13:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.167 13:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.167 13:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.167 13:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:43.167 13:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.167 13:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:15:43.167 [ 00:15:43.167 { 00:15:43.167 "name": "NewBaseBdev", 00:15:43.167 "aliases": [ 00:15:43.167 "2ef57e72-a538-4497-909e-211221c2898b" 00:15:43.167 ], 00:15:43.167 "product_name": "Malloc disk", 00:15:43.167 "block_size": 512, 00:15:43.167 "num_blocks": 65536, 00:15:43.167 "uuid": "2ef57e72-a538-4497-909e-211221c2898b", 00:15:43.167 "assigned_rate_limits": { 00:15:43.167 "rw_ios_per_sec": 0, 00:15:43.167 "rw_mbytes_per_sec": 0, 00:15:43.167 "r_mbytes_per_sec": 0, 00:15:43.167 "w_mbytes_per_sec": 0 00:15:43.167 }, 00:15:43.167 "claimed": true, 00:15:43.167 "claim_type": "exclusive_write", 00:15:43.167 "zoned": false, 00:15:43.167 "supported_io_types": { 00:15:43.167 "read": true, 00:15:43.167 "write": true, 00:15:43.167 "unmap": true, 00:15:43.167 "flush": true, 00:15:43.167 "reset": true, 00:15:43.167 "nvme_admin": false, 00:15:43.167 "nvme_io": false, 00:15:43.167 "nvme_io_md": false, 00:15:43.167 "write_zeroes": true, 00:15:43.167 "zcopy": true, 00:15:43.167 "get_zone_info": false, 00:15:43.167 "zone_management": false, 00:15:43.167 "zone_append": false, 00:15:43.167 "compare": false, 00:15:43.167 "compare_and_write": false, 00:15:43.167 "abort": true, 00:15:43.167 "seek_hole": false, 00:15:43.167 "seek_data": false, 00:15:43.167 "copy": true, 00:15:43.167 "nvme_iov_md": false 00:15:43.167 }, 00:15:43.167 "memory_domains": [ 00:15:43.167 { 00:15:43.167 "dma_device_id": "system", 00:15:43.167 "dma_device_type": 1 00:15:43.167 }, 00:15:43.167 { 00:15:43.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.167 "dma_device_type": 2 00:15:43.167 } 00:15:43.167 ], 00:15:43.167 "driver_specific": {} 00:15:43.167 } 00:15:43.167 ] 00:15:43.167 13:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.167 13:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:43.167 13:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:15:43.167 13:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.167 13:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:43.167 13:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:43.167 13:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.167 13:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:43.167 13:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.167 13:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.167 13:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.167 13:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.167 13:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.167 13:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.167 13:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.167 13:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.167 13:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.167 13:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.167 "name": "Existed_Raid", 00:15:43.167 "uuid": "4325e18d-0732-42b9-bba8-96f3c94cc8df", 00:15:43.167 "strip_size_kb": 64, 00:15:43.167 "state": "online", 00:15:43.167 "raid_level": "raid0", 00:15:43.167 "superblock": false, 00:15:43.167 "num_base_bdevs": 4, 00:15:43.167 
"num_base_bdevs_discovered": 4, 00:15:43.167 "num_base_bdevs_operational": 4, 00:15:43.167 "base_bdevs_list": [ 00:15:43.167 { 00:15:43.167 "name": "NewBaseBdev", 00:15:43.167 "uuid": "2ef57e72-a538-4497-909e-211221c2898b", 00:15:43.167 "is_configured": true, 00:15:43.167 "data_offset": 0, 00:15:43.167 "data_size": 65536 00:15:43.167 }, 00:15:43.167 { 00:15:43.167 "name": "BaseBdev2", 00:15:43.167 "uuid": "85ae228a-dc80-4d55-851d-bff0d02f7ad8", 00:15:43.167 "is_configured": true, 00:15:43.167 "data_offset": 0, 00:15:43.167 "data_size": 65536 00:15:43.167 }, 00:15:43.167 { 00:15:43.167 "name": "BaseBdev3", 00:15:43.167 "uuid": "f39cd5b8-b758-40a1-a745-23cdbce1d7b8", 00:15:43.167 "is_configured": true, 00:15:43.167 "data_offset": 0, 00:15:43.167 "data_size": 65536 00:15:43.167 }, 00:15:43.167 { 00:15:43.167 "name": "BaseBdev4", 00:15:43.167 "uuid": "070d2930-5c14-4baf-87d5-951a3680d1c7", 00:15:43.167 "is_configured": true, 00:15:43.167 "data_offset": 0, 00:15:43.167 "data_size": 65536 00:15:43.167 } 00:15:43.167 ] 00:15:43.167 }' 00:15:43.167 13:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.167 13:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.736 13:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:43.736 13:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:43.736 13:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:43.736 13:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:43.736 13:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:43.736 13:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:43.736 13:35:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:43.736 13:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:43.736 13:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.736 13:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.736 [2024-11-20 13:35:42.947054] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:43.736 13:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.736 13:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:43.736 "name": "Existed_Raid", 00:15:43.736 "aliases": [ 00:15:43.736 "4325e18d-0732-42b9-bba8-96f3c94cc8df" 00:15:43.736 ], 00:15:43.736 "product_name": "Raid Volume", 00:15:43.736 "block_size": 512, 00:15:43.736 "num_blocks": 262144, 00:15:43.736 "uuid": "4325e18d-0732-42b9-bba8-96f3c94cc8df", 00:15:43.736 "assigned_rate_limits": { 00:15:43.736 "rw_ios_per_sec": 0, 00:15:43.736 "rw_mbytes_per_sec": 0, 00:15:43.736 "r_mbytes_per_sec": 0, 00:15:43.736 "w_mbytes_per_sec": 0 00:15:43.736 }, 00:15:43.736 "claimed": false, 00:15:43.736 "zoned": false, 00:15:43.736 "supported_io_types": { 00:15:43.736 "read": true, 00:15:43.736 "write": true, 00:15:43.736 "unmap": true, 00:15:43.736 "flush": true, 00:15:43.736 "reset": true, 00:15:43.736 "nvme_admin": false, 00:15:43.736 "nvme_io": false, 00:15:43.736 "nvme_io_md": false, 00:15:43.736 "write_zeroes": true, 00:15:43.736 "zcopy": false, 00:15:43.736 "get_zone_info": false, 00:15:43.736 "zone_management": false, 00:15:43.736 "zone_append": false, 00:15:43.736 "compare": false, 00:15:43.736 "compare_and_write": false, 00:15:43.736 "abort": false, 00:15:43.736 "seek_hole": false, 00:15:43.736 "seek_data": false, 00:15:43.736 "copy": false, 00:15:43.736 "nvme_iov_md": false 00:15:43.736 }, 00:15:43.736 "memory_domains": [ 
00:15:43.736 { 00:15:43.736 "dma_device_id": "system", 00:15:43.736 "dma_device_type": 1 00:15:43.736 }, 00:15:43.736 { 00:15:43.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.736 "dma_device_type": 2 00:15:43.736 }, 00:15:43.736 { 00:15:43.736 "dma_device_id": "system", 00:15:43.736 "dma_device_type": 1 00:15:43.736 }, 00:15:43.736 { 00:15:43.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.736 "dma_device_type": 2 00:15:43.736 }, 00:15:43.736 { 00:15:43.736 "dma_device_id": "system", 00:15:43.736 "dma_device_type": 1 00:15:43.736 }, 00:15:43.736 { 00:15:43.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.736 "dma_device_type": 2 00:15:43.736 }, 00:15:43.736 { 00:15:43.736 "dma_device_id": "system", 00:15:43.736 "dma_device_type": 1 00:15:43.736 }, 00:15:43.736 { 00:15:43.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.736 "dma_device_type": 2 00:15:43.736 } 00:15:43.736 ], 00:15:43.736 "driver_specific": { 00:15:43.736 "raid": { 00:15:43.736 "uuid": "4325e18d-0732-42b9-bba8-96f3c94cc8df", 00:15:43.736 "strip_size_kb": 64, 00:15:43.736 "state": "online", 00:15:43.736 "raid_level": "raid0", 00:15:43.736 "superblock": false, 00:15:43.736 "num_base_bdevs": 4, 00:15:43.736 "num_base_bdevs_discovered": 4, 00:15:43.736 "num_base_bdevs_operational": 4, 00:15:43.736 "base_bdevs_list": [ 00:15:43.736 { 00:15:43.736 "name": "NewBaseBdev", 00:15:43.736 "uuid": "2ef57e72-a538-4497-909e-211221c2898b", 00:15:43.736 "is_configured": true, 00:15:43.736 "data_offset": 0, 00:15:43.736 "data_size": 65536 00:15:43.736 }, 00:15:43.736 { 00:15:43.736 "name": "BaseBdev2", 00:15:43.736 "uuid": "85ae228a-dc80-4d55-851d-bff0d02f7ad8", 00:15:43.736 "is_configured": true, 00:15:43.736 "data_offset": 0, 00:15:43.736 "data_size": 65536 00:15:43.736 }, 00:15:43.737 { 00:15:43.737 "name": "BaseBdev3", 00:15:43.737 "uuid": "f39cd5b8-b758-40a1-a745-23cdbce1d7b8", 00:15:43.737 "is_configured": true, 00:15:43.737 "data_offset": 0, 00:15:43.737 "data_size": 65536 
00:15:43.737 }, 00:15:43.737 { 00:15:43.737 "name": "BaseBdev4", 00:15:43.737 "uuid": "070d2930-5c14-4baf-87d5-951a3680d1c7", 00:15:43.737 "is_configured": true, 00:15:43.737 "data_offset": 0, 00:15:43.737 "data_size": 65536 00:15:43.737 } 00:15:43.737 ] 00:15:43.737 } 00:15:43.737 } 00:15:43.737 }' 00:15:43.737 13:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:43.737 13:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:43.737 BaseBdev2 00:15:43.737 BaseBdev3 00:15:43.737 BaseBdev4' 00:15:43.737 13:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:43.737 13:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:43.737 13:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:43.737 13:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:43.737 13:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:43.737 13:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.737 13:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.737 13:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.737 13:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:43.737 13:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:43.737 13:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:43.737 
13:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:43.737 13:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.737 13:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.737 13:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:43.737 13:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.737 13:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:43.737 13:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:43.737 13:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:43.737 13:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:43.737 13:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:43.737 13:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.737 13:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.737 13:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.996 13:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:43.996 13:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:43.996 13:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:43.996 13:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:15:43.996 13:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:43.996 13:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.996 13:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.996 13:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.996 13:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:43.996 13:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:43.996 13:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:43.996 13:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.996 13:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.996 [2024-11-20 13:35:43.274452] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:43.996 [2024-11-20 13:35:43.274629] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:43.996 [2024-11-20 13:35:43.274748] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:43.996 [2024-11-20 13:35:43.274820] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:43.996 [2024-11-20 13:35:43.274833] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:43.996 13:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.996 13:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69139 00:15:43.996 13:35:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 69139 ']' 00:15:43.996 13:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69139 00:15:43.996 13:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:43.996 13:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:43.996 13:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69139 00:15:43.996 killing process with pid 69139 00:15:43.996 13:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:43.996 13:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:43.996 13:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69139' 00:15:43.996 13:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69139 00:15:43.996 [2024-11-20 13:35:43.328422] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:43.996 13:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69139 00:15:44.562 [2024-11-20 13:35:43.750485] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:45.499 13:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:45.499 00:15:45.499 real 0m12.118s 00:15:45.499 user 0m19.144s 00:15:45.499 sys 0m2.566s 00:15:45.499 13:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:45.499 ************************************ 00:15:45.499 END TEST raid_state_function_test 00:15:45.499 ************************************ 00:15:45.499 13:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.758 13:35:44 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:15:45.758 13:35:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:45.758 13:35:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:45.758 13:35:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:45.758 ************************************ 00:15:45.758 START TEST raid_state_function_test_sb 00:15:45.758 ************************************ 00:15:45.758 13:35:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:15:45.758 13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:15:45.758 13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:45.758 13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:45.758 13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:45.758 13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:45.758 13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:45.758 13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:45.758 13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:45.758 13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:45.758 13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:45.758 13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:45.758 13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:45.758 13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:45.758 
13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:45.758 13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:45.758 13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:45.758 13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:45.758 13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:45.758 13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:45.758 13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:45.758 13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:45.758 13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:45.758 13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:45.758 13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:45.758 13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:15:45.758 13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:45.758 13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:45.758 13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:45.758 13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:45.758 13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=69810 00:15:45.758 13:35:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:45.758 13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69810' 00:15:45.758 Process raid pid: 69810 00:15:45.758 13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 69810 00:15:45.758 13:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 69810 ']' 00:15:45.758 13:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.758 13:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:45.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:45.758 13:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:45.758 13:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:45.758 13:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.758 [2024-11-20 13:35:45.108926] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:15:45.758 [2024-11-20 13:35:45.109083] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:46.018 [2024-11-20 13:35:45.279281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.018 [2024-11-20 13:35:45.397016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.277 [2024-11-20 13:35:45.604243] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:46.277 [2024-11-20 13:35:45.604493] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:46.536 13:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:46.536 13:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:46.536 13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:46.536 13:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.536 13:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.536 [2024-11-20 13:35:45.992256] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:46.536 [2024-11-20 13:35:45.992314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:46.536 [2024-11-20 13:35:45.992325] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:46.536 [2024-11-20 13:35:45.992338] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:46.536 [2024-11-20 13:35:45.992346] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:15:46.536 [2024-11-20 13:35:45.992359] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:46.536 [2024-11-20 13:35:45.992366] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:46.536 [2024-11-20 13:35:45.992378] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:46.536 13:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.536 13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:46.536 13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:46.536 13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:46.536 13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:46.536 13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.536 13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:46.536 13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.536 13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.536 13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.536 13:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.536 13:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.536 13:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.536 13:35:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.536 13:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.796 13:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.796 13:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.796 "name": "Existed_Raid", 00:15:46.796 "uuid": "d0d324f2-6653-475a-8405-84c1ce199fbe", 00:15:46.796 "strip_size_kb": 64, 00:15:46.796 "state": "configuring", 00:15:46.796 "raid_level": "raid0", 00:15:46.796 "superblock": true, 00:15:46.796 "num_base_bdevs": 4, 00:15:46.796 "num_base_bdevs_discovered": 0, 00:15:46.796 "num_base_bdevs_operational": 4, 00:15:46.796 "base_bdevs_list": [ 00:15:46.796 { 00:15:46.796 "name": "BaseBdev1", 00:15:46.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.796 "is_configured": false, 00:15:46.796 "data_offset": 0, 00:15:46.796 "data_size": 0 00:15:46.796 }, 00:15:46.796 { 00:15:46.796 "name": "BaseBdev2", 00:15:46.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.796 "is_configured": false, 00:15:46.796 "data_offset": 0, 00:15:46.796 "data_size": 0 00:15:46.796 }, 00:15:46.796 { 00:15:46.796 "name": "BaseBdev3", 00:15:46.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.796 "is_configured": false, 00:15:46.796 "data_offset": 0, 00:15:46.796 "data_size": 0 00:15:46.796 }, 00:15:46.796 { 00:15:46.796 "name": "BaseBdev4", 00:15:46.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.796 "is_configured": false, 00:15:46.796 "data_offset": 0, 00:15:46.796 "data_size": 0 00:15:46.796 } 00:15:46.796 ] 00:15:46.796 }' 00:15:46.796 13:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.796 13:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.055 13:35:46 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:47.055 13:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.055 13:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.055 [2024-11-20 13:35:46.447554] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:47.055 [2024-11-20 13:35:46.447597] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:47.055 13:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.055 13:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:47.055 13:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.055 13:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.055 [2024-11-20 13:35:46.459537] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:47.055 [2024-11-20 13:35:46.459584] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:47.055 [2024-11-20 13:35:46.459595] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:47.055 [2024-11-20 13:35:46.459607] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:47.055 [2024-11-20 13:35:46.459615] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:47.055 [2024-11-20 13:35:46.459627] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:47.055 [2024-11-20 13:35:46.459635] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:15:47.055 [2024-11-20 13:35:46.459647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:47.055 13:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.055 13:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:47.055 13:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.055 13:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.055 [2024-11-20 13:35:46.506192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:47.055 BaseBdev1 00:15:47.055 13:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.055 13:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:47.055 13:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:47.055 13:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:47.055 13:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:47.055 13:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:47.055 13:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:47.055 13:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:47.055 13:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.055 13:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.055 13:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:47.055 13:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:47.055 13:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.055 13:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.055 [ 00:15:47.055 { 00:15:47.055 "name": "BaseBdev1", 00:15:47.055 "aliases": [ 00:15:47.055 "00e81890-86cf-449e-8e19-aa680aaae077" 00:15:47.055 ], 00:15:47.055 "product_name": "Malloc disk", 00:15:47.055 "block_size": 512, 00:15:47.055 "num_blocks": 65536, 00:15:47.055 "uuid": "00e81890-86cf-449e-8e19-aa680aaae077", 00:15:47.055 "assigned_rate_limits": { 00:15:47.055 "rw_ios_per_sec": 0, 00:15:47.055 "rw_mbytes_per_sec": 0, 00:15:47.055 "r_mbytes_per_sec": 0, 00:15:47.055 "w_mbytes_per_sec": 0 00:15:47.055 }, 00:15:47.055 "claimed": true, 00:15:47.315 "claim_type": "exclusive_write", 00:15:47.315 "zoned": false, 00:15:47.315 "supported_io_types": { 00:15:47.315 "read": true, 00:15:47.315 "write": true, 00:15:47.315 "unmap": true, 00:15:47.315 "flush": true, 00:15:47.315 "reset": true, 00:15:47.315 "nvme_admin": false, 00:15:47.315 "nvme_io": false, 00:15:47.315 "nvme_io_md": false, 00:15:47.315 "write_zeroes": true, 00:15:47.315 "zcopy": true, 00:15:47.315 "get_zone_info": false, 00:15:47.315 "zone_management": false, 00:15:47.315 "zone_append": false, 00:15:47.315 "compare": false, 00:15:47.315 "compare_and_write": false, 00:15:47.315 "abort": true, 00:15:47.315 "seek_hole": false, 00:15:47.315 "seek_data": false, 00:15:47.315 "copy": true, 00:15:47.315 "nvme_iov_md": false 00:15:47.315 }, 00:15:47.315 "memory_domains": [ 00:15:47.315 { 00:15:47.315 "dma_device_id": "system", 00:15:47.315 "dma_device_type": 1 00:15:47.315 }, 00:15:47.315 { 00:15:47.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.315 "dma_device_type": 2 00:15:47.315 } 00:15:47.315 ], 00:15:47.315 "driver_specific": {} 
00:15:47.315 } 00:15:47.315 ] 00:15:47.315 13:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.315 13:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:47.315 13:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:47.315 13:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:47.315 13:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:47.315 13:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:47.315 13:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.315 13:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:47.315 13:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.315 13:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.315 13:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.315 13:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.315 13:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.315 13:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.315 13:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.315 13:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.315 13:35:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.315 13:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.315 "name": "Existed_Raid", 00:15:47.315 "uuid": "d1a42491-8a10-4ab9-8e41-533491b0d1a9", 00:15:47.315 "strip_size_kb": 64, 00:15:47.315 "state": "configuring", 00:15:47.315 "raid_level": "raid0", 00:15:47.315 "superblock": true, 00:15:47.315 "num_base_bdevs": 4, 00:15:47.315 "num_base_bdevs_discovered": 1, 00:15:47.315 "num_base_bdevs_operational": 4, 00:15:47.315 "base_bdevs_list": [ 00:15:47.315 { 00:15:47.315 "name": "BaseBdev1", 00:15:47.315 "uuid": "00e81890-86cf-449e-8e19-aa680aaae077", 00:15:47.315 "is_configured": true, 00:15:47.315 "data_offset": 2048, 00:15:47.315 "data_size": 63488 00:15:47.315 }, 00:15:47.315 { 00:15:47.315 "name": "BaseBdev2", 00:15:47.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.315 "is_configured": false, 00:15:47.315 "data_offset": 0, 00:15:47.315 "data_size": 0 00:15:47.315 }, 00:15:47.315 { 00:15:47.315 "name": "BaseBdev3", 00:15:47.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.315 "is_configured": false, 00:15:47.315 "data_offset": 0, 00:15:47.315 "data_size": 0 00:15:47.315 }, 00:15:47.315 { 00:15:47.315 "name": "BaseBdev4", 00:15:47.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.315 "is_configured": false, 00:15:47.315 "data_offset": 0, 00:15:47.315 "data_size": 0 00:15:47.315 } 00:15:47.315 ] 00:15:47.315 }' 00:15:47.315 13:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.315 13:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.574 13:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:47.574 13:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.574 13:35:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:47.574 [2024-11-20 13:35:46.949619] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:47.574 [2024-11-20 13:35:46.949803] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:47.574 13:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.574 13:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:47.574 13:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.574 13:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.574 [2024-11-20 13:35:46.961668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:47.574 [2024-11-20 13:35:46.963880] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:47.574 [2024-11-20 13:35:46.964034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:47.574 [2024-11-20 13:35:46.964139] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:47.574 [2024-11-20 13:35:46.964167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:47.574 [2024-11-20 13:35:46.964176] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:47.574 [2024-11-20 13:35:46.964190] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:47.574 13:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.574 13:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:47.574 13:35:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:47.574 13:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:47.574 13:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:47.574 13:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:47.574 13:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:47.574 13:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.574 13:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:47.574 13:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.574 13:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.574 13:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.574 13:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.574 13:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.574 13:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.574 13:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.574 13:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.574 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.574 13:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.574 "name": 
"Existed_Raid", 00:15:47.574 "uuid": "fdf7a1a3-1271-4400-a737-3a139283c8b0", 00:15:47.574 "strip_size_kb": 64, 00:15:47.574 "state": "configuring", 00:15:47.574 "raid_level": "raid0", 00:15:47.574 "superblock": true, 00:15:47.574 "num_base_bdevs": 4, 00:15:47.574 "num_base_bdevs_discovered": 1, 00:15:47.574 "num_base_bdevs_operational": 4, 00:15:47.574 "base_bdevs_list": [ 00:15:47.574 { 00:15:47.574 "name": "BaseBdev1", 00:15:47.574 "uuid": "00e81890-86cf-449e-8e19-aa680aaae077", 00:15:47.574 "is_configured": true, 00:15:47.574 "data_offset": 2048, 00:15:47.574 "data_size": 63488 00:15:47.574 }, 00:15:47.574 { 00:15:47.574 "name": "BaseBdev2", 00:15:47.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.574 "is_configured": false, 00:15:47.574 "data_offset": 0, 00:15:47.574 "data_size": 0 00:15:47.574 }, 00:15:47.574 { 00:15:47.574 "name": "BaseBdev3", 00:15:47.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.574 "is_configured": false, 00:15:47.574 "data_offset": 0, 00:15:47.574 "data_size": 0 00:15:47.574 }, 00:15:47.574 { 00:15:47.574 "name": "BaseBdev4", 00:15:47.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.574 "is_configured": false, 00:15:47.574 "data_offset": 0, 00:15:47.574 "data_size": 0 00:15:47.574 } 00:15:47.574 ] 00:15:47.574 }' 00:15:47.574 13:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.574 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.142 13:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:48.142 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.142 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.142 [2024-11-20 13:35:47.436959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:15:48.142 BaseBdev2 00:15:48.142 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.142 13:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:48.142 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:48.142 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:48.142 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:48.142 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:48.142 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:48.142 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:48.142 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.142 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.142 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.142 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:48.142 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.142 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.142 [ 00:15:48.142 { 00:15:48.142 "name": "BaseBdev2", 00:15:48.142 "aliases": [ 00:15:48.142 "43d9f781-4a78-44c3-b03c-5785852d41cc" 00:15:48.142 ], 00:15:48.142 "product_name": "Malloc disk", 00:15:48.142 "block_size": 512, 00:15:48.142 "num_blocks": 65536, 00:15:48.142 "uuid": "43d9f781-4a78-44c3-b03c-5785852d41cc", 00:15:48.142 
"assigned_rate_limits": { 00:15:48.142 "rw_ios_per_sec": 0, 00:15:48.142 "rw_mbytes_per_sec": 0, 00:15:48.142 "r_mbytes_per_sec": 0, 00:15:48.142 "w_mbytes_per_sec": 0 00:15:48.142 }, 00:15:48.142 "claimed": true, 00:15:48.142 "claim_type": "exclusive_write", 00:15:48.142 "zoned": false, 00:15:48.142 "supported_io_types": { 00:15:48.142 "read": true, 00:15:48.142 "write": true, 00:15:48.142 "unmap": true, 00:15:48.142 "flush": true, 00:15:48.142 "reset": true, 00:15:48.142 "nvme_admin": false, 00:15:48.142 "nvme_io": false, 00:15:48.142 "nvme_io_md": false, 00:15:48.142 "write_zeroes": true, 00:15:48.142 "zcopy": true, 00:15:48.142 "get_zone_info": false, 00:15:48.142 "zone_management": false, 00:15:48.142 "zone_append": false, 00:15:48.142 "compare": false, 00:15:48.142 "compare_and_write": false, 00:15:48.142 "abort": true, 00:15:48.142 "seek_hole": false, 00:15:48.142 "seek_data": false, 00:15:48.142 "copy": true, 00:15:48.142 "nvme_iov_md": false 00:15:48.142 }, 00:15:48.142 "memory_domains": [ 00:15:48.142 { 00:15:48.142 "dma_device_id": "system", 00:15:48.142 "dma_device_type": 1 00:15:48.142 }, 00:15:48.142 { 00:15:48.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.142 "dma_device_type": 2 00:15:48.142 } 00:15:48.142 ], 00:15:48.142 "driver_specific": {} 00:15:48.142 } 00:15:48.142 ] 00:15:48.142 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.142 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:48.142 13:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:48.142 13:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:48.142 13:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:48.142 13:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:15:48.142 13:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:48.142 13:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:48.142 13:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.142 13:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:48.142 13:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.142 13:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.142 13:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.142 13:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.142 13:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.142 13:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.142 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.142 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.142 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.142 13:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.142 "name": "Existed_Raid", 00:15:48.142 "uuid": "fdf7a1a3-1271-4400-a737-3a139283c8b0", 00:15:48.142 "strip_size_kb": 64, 00:15:48.142 "state": "configuring", 00:15:48.142 "raid_level": "raid0", 00:15:48.142 "superblock": true, 00:15:48.142 "num_base_bdevs": 4, 00:15:48.142 "num_base_bdevs_discovered": 2, 00:15:48.142 "num_base_bdevs_operational": 4, 
00:15:48.142 "base_bdevs_list": [ 00:15:48.142 { 00:15:48.142 "name": "BaseBdev1", 00:15:48.142 "uuid": "00e81890-86cf-449e-8e19-aa680aaae077", 00:15:48.142 "is_configured": true, 00:15:48.142 "data_offset": 2048, 00:15:48.142 "data_size": 63488 00:15:48.142 }, 00:15:48.142 { 00:15:48.142 "name": "BaseBdev2", 00:15:48.142 "uuid": "43d9f781-4a78-44c3-b03c-5785852d41cc", 00:15:48.142 "is_configured": true, 00:15:48.142 "data_offset": 2048, 00:15:48.142 "data_size": 63488 00:15:48.142 }, 00:15:48.142 { 00:15:48.142 "name": "BaseBdev3", 00:15:48.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.142 "is_configured": false, 00:15:48.142 "data_offset": 0, 00:15:48.142 "data_size": 0 00:15:48.142 }, 00:15:48.142 { 00:15:48.142 "name": "BaseBdev4", 00:15:48.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.142 "is_configured": false, 00:15:48.142 "data_offset": 0, 00:15:48.142 "data_size": 0 00:15:48.142 } 00:15:48.142 ] 00:15:48.142 }' 00:15:48.142 13:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.142 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.402 13:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:48.402 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.402 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.662 [2024-11-20 13:35:47.906588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:48.662 BaseBdev3 00:15:48.662 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.662 13:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:48.662 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:15:48.662 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:48.662 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:48.662 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:48.662 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:48.662 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:48.662 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.662 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.662 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.662 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:48.662 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.662 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.662 [ 00:15:48.662 { 00:15:48.662 "name": "BaseBdev3", 00:15:48.662 "aliases": [ 00:15:48.662 "4be2ac83-6435-438c-ad17-2db997053654" 00:15:48.662 ], 00:15:48.662 "product_name": "Malloc disk", 00:15:48.662 "block_size": 512, 00:15:48.662 "num_blocks": 65536, 00:15:48.662 "uuid": "4be2ac83-6435-438c-ad17-2db997053654", 00:15:48.662 "assigned_rate_limits": { 00:15:48.662 "rw_ios_per_sec": 0, 00:15:48.662 "rw_mbytes_per_sec": 0, 00:15:48.662 "r_mbytes_per_sec": 0, 00:15:48.662 "w_mbytes_per_sec": 0 00:15:48.662 }, 00:15:48.662 "claimed": true, 00:15:48.662 "claim_type": "exclusive_write", 00:15:48.662 "zoned": false, 00:15:48.662 "supported_io_types": { 00:15:48.662 "read": true, 00:15:48.662 
"write": true, 00:15:48.662 "unmap": true, 00:15:48.662 "flush": true, 00:15:48.662 "reset": true, 00:15:48.662 "nvme_admin": false, 00:15:48.662 "nvme_io": false, 00:15:48.662 "nvme_io_md": false, 00:15:48.662 "write_zeroes": true, 00:15:48.662 "zcopy": true, 00:15:48.662 "get_zone_info": false, 00:15:48.662 "zone_management": false, 00:15:48.662 "zone_append": false, 00:15:48.662 "compare": false, 00:15:48.662 "compare_and_write": false, 00:15:48.662 "abort": true, 00:15:48.662 "seek_hole": false, 00:15:48.662 "seek_data": false, 00:15:48.662 "copy": true, 00:15:48.662 "nvme_iov_md": false 00:15:48.662 }, 00:15:48.662 "memory_domains": [ 00:15:48.662 { 00:15:48.662 "dma_device_id": "system", 00:15:48.662 "dma_device_type": 1 00:15:48.662 }, 00:15:48.662 { 00:15:48.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.662 "dma_device_type": 2 00:15:48.662 } 00:15:48.662 ], 00:15:48.662 "driver_specific": {} 00:15:48.662 } 00:15:48.662 ] 00:15:48.662 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.662 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:48.662 13:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:48.662 13:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:48.662 13:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:48.662 13:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:48.662 13:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:48.662 13:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:48.662 13:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:15:48.662 13:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:48.662 13:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.662 13:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.662 13:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.662 13:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.662 13:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.662 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.662 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.662 13:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.662 13:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.662 13:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.662 "name": "Existed_Raid", 00:15:48.662 "uuid": "fdf7a1a3-1271-4400-a737-3a139283c8b0", 00:15:48.662 "strip_size_kb": 64, 00:15:48.662 "state": "configuring", 00:15:48.662 "raid_level": "raid0", 00:15:48.662 "superblock": true, 00:15:48.662 "num_base_bdevs": 4, 00:15:48.662 "num_base_bdevs_discovered": 3, 00:15:48.662 "num_base_bdevs_operational": 4, 00:15:48.662 "base_bdevs_list": [ 00:15:48.662 { 00:15:48.662 "name": "BaseBdev1", 00:15:48.662 "uuid": "00e81890-86cf-449e-8e19-aa680aaae077", 00:15:48.662 "is_configured": true, 00:15:48.662 "data_offset": 2048, 00:15:48.662 "data_size": 63488 00:15:48.662 }, 00:15:48.662 { 00:15:48.662 "name": "BaseBdev2", 00:15:48.662 "uuid": 
"43d9f781-4a78-44c3-b03c-5785852d41cc", 00:15:48.662 "is_configured": true, 00:15:48.662 "data_offset": 2048, 00:15:48.662 "data_size": 63488 00:15:48.662 }, 00:15:48.662 { 00:15:48.662 "name": "BaseBdev3", 00:15:48.662 "uuid": "4be2ac83-6435-438c-ad17-2db997053654", 00:15:48.662 "is_configured": true, 00:15:48.662 "data_offset": 2048, 00:15:48.662 "data_size": 63488 00:15:48.662 }, 00:15:48.662 { 00:15:48.662 "name": "BaseBdev4", 00:15:48.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.662 "is_configured": false, 00:15:48.662 "data_offset": 0, 00:15:48.662 "data_size": 0 00:15:48.662 } 00:15:48.662 ] 00:15:48.662 }' 00:15:48.662 13:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.663 13:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.926 13:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:48.926 13:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.926 13:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.209 [2024-11-20 13:35:48.413648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:49.209 [2024-11-20 13:35:48.413933] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:49.209 [2024-11-20 13:35:48.413950] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:49.209 [2024-11-20 13:35:48.414312] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:49.209 BaseBdev4 00:15:49.209 [2024-11-20 13:35:48.414458] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:49.209 [2024-11-20 13:35:48.414472] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:15:49.209 [2024-11-20 13:35:48.414620] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.209 13:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.209 13:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:49.209 13:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:49.209 13:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:49.209 13:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:49.209 13:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:49.209 13:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:49.209 13:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:49.209 13:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.209 13:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.209 13:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.209 13:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:49.209 13:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.209 13:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.209 [ 00:15:49.209 { 00:15:49.209 "name": "BaseBdev4", 00:15:49.209 "aliases": [ 00:15:49.209 "81e66833-1f04-4f4b-b453-f3848bbe1b79" 00:15:49.209 ], 00:15:49.209 "product_name": "Malloc disk", 00:15:49.209 "block_size": 512, 00:15:49.209 
"num_blocks": 65536, 00:15:49.209 "uuid": "81e66833-1f04-4f4b-b453-f3848bbe1b79", 00:15:49.209 "assigned_rate_limits": { 00:15:49.209 "rw_ios_per_sec": 0, 00:15:49.209 "rw_mbytes_per_sec": 0, 00:15:49.210 "r_mbytes_per_sec": 0, 00:15:49.210 "w_mbytes_per_sec": 0 00:15:49.210 }, 00:15:49.210 "claimed": true, 00:15:49.210 "claim_type": "exclusive_write", 00:15:49.210 "zoned": false, 00:15:49.210 "supported_io_types": { 00:15:49.210 "read": true, 00:15:49.210 "write": true, 00:15:49.210 "unmap": true, 00:15:49.210 "flush": true, 00:15:49.210 "reset": true, 00:15:49.210 "nvme_admin": false, 00:15:49.210 "nvme_io": false, 00:15:49.210 "nvme_io_md": false, 00:15:49.210 "write_zeroes": true, 00:15:49.210 "zcopy": true, 00:15:49.210 "get_zone_info": false, 00:15:49.210 "zone_management": false, 00:15:49.210 "zone_append": false, 00:15:49.210 "compare": false, 00:15:49.210 "compare_and_write": false, 00:15:49.210 "abort": true, 00:15:49.210 "seek_hole": false, 00:15:49.210 "seek_data": false, 00:15:49.210 "copy": true, 00:15:49.210 "nvme_iov_md": false 00:15:49.210 }, 00:15:49.210 "memory_domains": [ 00:15:49.210 { 00:15:49.210 "dma_device_id": "system", 00:15:49.210 "dma_device_type": 1 00:15:49.210 }, 00:15:49.210 { 00:15:49.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.210 "dma_device_type": 2 00:15:49.210 } 00:15:49.210 ], 00:15:49.210 "driver_specific": {} 00:15:49.210 } 00:15:49.210 ] 00:15:49.210 13:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.210 13:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:49.210 13:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:49.210 13:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:49.210 13:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:15:49.210 13:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.210 13:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.210 13:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:49.210 13:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.210 13:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:49.210 13:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.210 13:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.210 13:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.210 13:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.210 13:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.210 13:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.210 13:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.210 13:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.210 13:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.210 13:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.210 "name": "Existed_Raid", 00:15:49.210 "uuid": "fdf7a1a3-1271-4400-a737-3a139283c8b0", 00:15:49.210 "strip_size_kb": 64, 00:15:49.210 "state": "online", 00:15:49.210 "raid_level": "raid0", 00:15:49.210 "superblock": true, 00:15:49.210 "num_base_bdevs": 4, 
00:15:49.210 "num_base_bdevs_discovered": 4, 00:15:49.210 "num_base_bdevs_operational": 4, 00:15:49.210 "base_bdevs_list": [ 00:15:49.210 { 00:15:49.210 "name": "BaseBdev1", 00:15:49.210 "uuid": "00e81890-86cf-449e-8e19-aa680aaae077", 00:15:49.210 "is_configured": true, 00:15:49.210 "data_offset": 2048, 00:15:49.210 "data_size": 63488 00:15:49.210 }, 00:15:49.210 { 00:15:49.210 "name": "BaseBdev2", 00:15:49.210 "uuid": "43d9f781-4a78-44c3-b03c-5785852d41cc", 00:15:49.210 "is_configured": true, 00:15:49.210 "data_offset": 2048, 00:15:49.210 "data_size": 63488 00:15:49.210 }, 00:15:49.210 { 00:15:49.210 "name": "BaseBdev3", 00:15:49.210 "uuid": "4be2ac83-6435-438c-ad17-2db997053654", 00:15:49.210 "is_configured": true, 00:15:49.210 "data_offset": 2048, 00:15:49.210 "data_size": 63488 00:15:49.210 }, 00:15:49.210 { 00:15:49.210 "name": "BaseBdev4", 00:15:49.210 "uuid": "81e66833-1f04-4f4b-b453-f3848bbe1b79", 00:15:49.210 "is_configured": true, 00:15:49.210 "data_offset": 2048, 00:15:49.210 "data_size": 63488 00:15:49.210 } 00:15:49.210 ] 00:15:49.210 }' 00:15:49.210 13:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.210 13:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.470 13:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:49.470 13:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:49.470 13:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:49.470 13:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:49.470 13:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:49.470 13:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:49.470 
13:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:49.470 13:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:49.470 13:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.470 13:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.470 [2024-11-20 13:35:48.881534] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:49.470 13:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.470 13:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:49.470 "name": "Existed_Raid", 00:15:49.470 "aliases": [ 00:15:49.470 "fdf7a1a3-1271-4400-a737-3a139283c8b0" 00:15:49.470 ], 00:15:49.470 "product_name": "Raid Volume", 00:15:49.470 "block_size": 512, 00:15:49.470 "num_blocks": 253952, 00:15:49.470 "uuid": "fdf7a1a3-1271-4400-a737-3a139283c8b0", 00:15:49.470 "assigned_rate_limits": { 00:15:49.470 "rw_ios_per_sec": 0, 00:15:49.470 "rw_mbytes_per_sec": 0, 00:15:49.470 "r_mbytes_per_sec": 0, 00:15:49.470 "w_mbytes_per_sec": 0 00:15:49.470 }, 00:15:49.470 "claimed": false, 00:15:49.470 "zoned": false, 00:15:49.470 "supported_io_types": { 00:15:49.470 "read": true, 00:15:49.470 "write": true, 00:15:49.470 "unmap": true, 00:15:49.470 "flush": true, 00:15:49.470 "reset": true, 00:15:49.470 "nvme_admin": false, 00:15:49.470 "nvme_io": false, 00:15:49.470 "nvme_io_md": false, 00:15:49.470 "write_zeroes": true, 00:15:49.470 "zcopy": false, 00:15:49.470 "get_zone_info": false, 00:15:49.470 "zone_management": false, 00:15:49.470 "zone_append": false, 00:15:49.470 "compare": false, 00:15:49.470 "compare_and_write": false, 00:15:49.470 "abort": false, 00:15:49.470 "seek_hole": false, 00:15:49.470 "seek_data": false, 00:15:49.470 "copy": false, 00:15:49.470 
"nvme_iov_md": false 00:15:49.470 }, 00:15:49.470 "memory_domains": [ 00:15:49.470 { 00:15:49.470 "dma_device_id": "system", 00:15:49.470 "dma_device_type": 1 00:15:49.470 }, 00:15:49.471 { 00:15:49.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.471 "dma_device_type": 2 00:15:49.471 }, 00:15:49.471 { 00:15:49.471 "dma_device_id": "system", 00:15:49.471 "dma_device_type": 1 00:15:49.471 }, 00:15:49.471 { 00:15:49.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.471 "dma_device_type": 2 00:15:49.471 }, 00:15:49.471 { 00:15:49.471 "dma_device_id": "system", 00:15:49.471 "dma_device_type": 1 00:15:49.471 }, 00:15:49.471 { 00:15:49.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.471 "dma_device_type": 2 00:15:49.471 }, 00:15:49.471 { 00:15:49.471 "dma_device_id": "system", 00:15:49.471 "dma_device_type": 1 00:15:49.471 }, 00:15:49.471 { 00:15:49.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.471 "dma_device_type": 2 00:15:49.471 } 00:15:49.471 ], 00:15:49.471 "driver_specific": { 00:15:49.471 "raid": { 00:15:49.471 "uuid": "fdf7a1a3-1271-4400-a737-3a139283c8b0", 00:15:49.471 "strip_size_kb": 64, 00:15:49.471 "state": "online", 00:15:49.471 "raid_level": "raid0", 00:15:49.471 "superblock": true, 00:15:49.471 "num_base_bdevs": 4, 00:15:49.471 "num_base_bdevs_discovered": 4, 00:15:49.471 "num_base_bdevs_operational": 4, 00:15:49.471 "base_bdevs_list": [ 00:15:49.471 { 00:15:49.471 "name": "BaseBdev1", 00:15:49.471 "uuid": "00e81890-86cf-449e-8e19-aa680aaae077", 00:15:49.471 "is_configured": true, 00:15:49.471 "data_offset": 2048, 00:15:49.471 "data_size": 63488 00:15:49.471 }, 00:15:49.471 { 00:15:49.471 "name": "BaseBdev2", 00:15:49.471 "uuid": "43d9f781-4a78-44c3-b03c-5785852d41cc", 00:15:49.471 "is_configured": true, 00:15:49.471 "data_offset": 2048, 00:15:49.471 "data_size": 63488 00:15:49.471 }, 00:15:49.471 { 00:15:49.471 "name": "BaseBdev3", 00:15:49.471 "uuid": "4be2ac83-6435-438c-ad17-2db997053654", 00:15:49.471 "is_configured": true, 
00:15:49.471 "data_offset": 2048, 00:15:49.471 "data_size": 63488 00:15:49.471 }, 00:15:49.471 { 00:15:49.471 "name": "BaseBdev4", 00:15:49.471 "uuid": "81e66833-1f04-4f4b-b453-f3848bbe1b79", 00:15:49.471 "is_configured": true, 00:15:49.471 "data_offset": 2048, 00:15:49.471 "data_size": 63488 00:15:49.471 } 00:15:49.471 ] 00:15:49.471 } 00:15:49.471 } 00:15:49.471 }' 00:15:49.471 13:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:49.730 13:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:49.730 BaseBdev2 00:15:49.730 BaseBdev3 00:15:49.730 BaseBdev4' 00:15:49.730 13:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:49.730 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:49.730 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:49.730 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:49.730 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:49.730 13:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.730 13:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.730 13:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.730 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:49.730 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:49.730 13:35:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:49.730 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:49.730 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:49.730 13:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.730 13:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.730 13:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.730 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:49.730 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:49.730 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:49.730 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:49.730 13:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.730 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:49.730 13:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.730 13:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.730 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:49.730 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:49.730 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:15:49.730 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:49.730 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:49.730 13:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.730 13:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.730 13:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.730 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:49.730 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:49.730 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:49.730 13:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.730 13:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.730 [2024-11-20 13:35:49.173020] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:49.730 [2024-11-20 13:35:49.173053] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:49.730 [2024-11-20 13:35:49.173128] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:49.988 13:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.988 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:49.988 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:15:49.988 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:15:49.988 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:15:49.988 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:15:49.988 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:15:49.988 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.988 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:15:49.988 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:49.988 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.988 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:49.988 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.988 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.988 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.988 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.988 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.988 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.988 13:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.988 13:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.988 13:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:49.988 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.988 "name": "Existed_Raid", 00:15:49.988 "uuid": "fdf7a1a3-1271-4400-a737-3a139283c8b0", 00:15:49.988 "strip_size_kb": 64, 00:15:49.988 "state": "offline", 00:15:49.988 "raid_level": "raid0", 00:15:49.988 "superblock": true, 00:15:49.988 "num_base_bdevs": 4, 00:15:49.988 "num_base_bdevs_discovered": 3, 00:15:49.988 "num_base_bdevs_operational": 3, 00:15:49.988 "base_bdevs_list": [ 00:15:49.988 { 00:15:49.988 "name": null, 00:15:49.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.988 "is_configured": false, 00:15:49.988 "data_offset": 0, 00:15:49.988 "data_size": 63488 00:15:49.988 }, 00:15:49.988 { 00:15:49.988 "name": "BaseBdev2", 00:15:49.988 "uuid": "43d9f781-4a78-44c3-b03c-5785852d41cc", 00:15:49.988 "is_configured": true, 00:15:49.988 "data_offset": 2048, 00:15:49.988 "data_size": 63488 00:15:49.988 }, 00:15:49.988 { 00:15:49.988 "name": "BaseBdev3", 00:15:49.988 "uuid": "4be2ac83-6435-438c-ad17-2db997053654", 00:15:49.988 "is_configured": true, 00:15:49.988 "data_offset": 2048, 00:15:49.988 "data_size": 63488 00:15:49.988 }, 00:15:49.988 { 00:15:49.988 "name": "BaseBdev4", 00:15:49.988 "uuid": "81e66833-1f04-4f4b-b453-f3848bbe1b79", 00:15:49.988 "is_configured": true, 00:15:49.988 "data_offset": 2048, 00:15:49.988 "data_size": 63488 00:15:49.988 } 00:15:49.988 ] 00:15:49.988 }' 00:15:49.988 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.988 13:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.247 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:50.247 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:50.247 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:50.247 13:35:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.247 13:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.247 13:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.247 13:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.247 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:50.247 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:50.247 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:50.247 13:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.247 13:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.507 [2024-11-20 13:35:49.733379] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:50.507 13:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.507 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:50.507 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:50.507 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:50.507 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.507 13:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.507 13:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.507 13:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:50.507 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:50.507 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:50.507 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:50.507 13:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.507 13:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.507 [2024-11-20 13:35:49.889167] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:50.507 13:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.507 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:50.507 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:50.766 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.766 13:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:50.766 13:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.766 13:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.766 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.766 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:50.766 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:50.766 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:50.766 13:35:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.766 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.766 [2024-11-20 13:35:50.036462] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:50.766 [2024-11-20 13:35:50.036518] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:50.766 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.766 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:50.766 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:50.766 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.766 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:50.766 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.766 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.766 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.766 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:50.766 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:50.766 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:50.766 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:50.766 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:50.766 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:15:50.766 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.766 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.766 BaseBdev2 00:15:50.766 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.766 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:50.766 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:50.766 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:50.766 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:50.766 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:50.766 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:50.766 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:50.766 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.766 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.766 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.766 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:50.766 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.766 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.025 [ 00:15:51.025 { 00:15:51.025 "name": "BaseBdev2", 00:15:51.025 "aliases": [ 00:15:51.025 
"6d0e4e30-8840-49e6-b509-618d205d012d" 00:15:51.025 ], 00:15:51.025 "product_name": "Malloc disk", 00:15:51.025 "block_size": 512, 00:15:51.025 "num_blocks": 65536, 00:15:51.025 "uuid": "6d0e4e30-8840-49e6-b509-618d205d012d", 00:15:51.025 "assigned_rate_limits": { 00:15:51.025 "rw_ios_per_sec": 0, 00:15:51.025 "rw_mbytes_per_sec": 0, 00:15:51.025 "r_mbytes_per_sec": 0, 00:15:51.025 "w_mbytes_per_sec": 0 00:15:51.025 }, 00:15:51.025 "claimed": false, 00:15:51.025 "zoned": false, 00:15:51.025 "supported_io_types": { 00:15:51.025 "read": true, 00:15:51.025 "write": true, 00:15:51.025 "unmap": true, 00:15:51.025 "flush": true, 00:15:51.025 "reset": true, 00:15:51.025 "nvme_admin": false, 00:15:51.025 "nvme_io": false, 00:15:51.025 "nvme_io_md": false, 00:15:51.025 "write_zeroes": true, 00:15:51.025 "zcopy": true, 00:15:51.025 "get_zone_info": false, 00:15:51.025 "zone_management": false, 00:15:51.025 "zone_append": false, 00:15:51.025 "compare": false, 00:15:51.025 "compare_and_write": false, 00:15:51.025 "abort": true, 00:15:51.025 "seek_hole": false, 00:15:51.025 "seek_data": false, 00:15:51.025 "copy": true, 00:15:51.025 "nvme_iov_md": false 00:15:51.025 }, 00:15:51.025 "memory_domains": [ 00:15:51.025 { 00:15:51.025 "dma_device_id": "system", 00:15:51.025 "dma_device_type": 1 00:15:51.025 }, 00:15:51.025 { 00:15:51.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.025 "dma_device_type": 2 00:15:51.025 } 00:15:51.025 ], 00:15:51.025 "driver_specific": {} 00:15:51.025 } 00:15:51.025 ] 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:51.025 13:35:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.025 BaseBdev3 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.025 [ 00:15:51.025 { 
00:15:51.025 "name": "BaseBdev3", 00:15:51.025 "aliases": [ 00:15:51.025 "e6e8c7ca-79d8-439b-b3d3-a216bfe9e38f" 00:15:51.025 ], 00:15:51.025 "product_name": "Malloc disk", 00:15:51.025 "block_size": 512, 00:15:51.025 "num_blocks": 65536, 00:15:51.025 "uuid": "e6e8c7ca-79d8-439b-b3d3-a216bfe9e38f", 00:15:51.025 "assigned_rate_limits": { 00:15:51.025 "rw_ios_per_sec": 0, 00:15:51.025 "rw_mbytes_per_sec": 0, 00:15:51.025 "r_mbytes_per_sec": 0, 00:15:51.025 "w_mbytes_per_sec": 0 00:15:51.025 }, 00:15:51.025 "claimed": false, 00:15:51.025 "zoned": false, 00:15:51.025 "supported_io_types": { 00:15:51.025 "read": true, 00:15:51.025 "write": true, 00:15:51.025 "unmap": true, 00:15:51.025 "flush": true, 00:15:51.025 "reset": true, 00:15:51.025 "nvme_admin": false, 00:15:51.025 "nvme_io": false, 00:15:51.025 "nvme_io_md": false, 00:15:51.025 "write_zeroes": true, 00:15:51.025 "zcopy": true, 00:15:51.025 "get_zone_info": false, 00:15:51.025 "zone_management": false, 00:15:51.025 "zone_append": false, 00:15:51.025 "compare": false, 00:15:51.025 "compare_and_write": false, 00:15:51.025 "abort": true, 00:15:51.025 "seek_hole": false, 00:15:51.025 "seek_data": false, 00:15:51.025 "copy": true, 00:15:51.025 "nvme_iov_md": false 00:15:51.025 }, 00:15:51.025 "memory_domains": [ 00:15:51.025 { 00:15:51.025 "dma_device_id": "system", 00:15:51.025 "dma_device_type": 1 00:15:51.025 }, 00:15:51.025 { 00:15:51.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.025 "dma_device_type": 2 00:15:51.025 } 00:15:51.025 ], 00:15:51.025 "driver_specific": {} 00:15:51.025 } 00:15:51.025 ] 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.025 BaseBdev4 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:15:51.025 [ 00:15:51.025 { 00:15:51.025 "name": "BaseBdev4", 00:15:51.025 "aliases": [ 00:15:51.025 "b402af76-b9e9-4474-8938-80260e6dffc8" 00:15:51.025 ], 00:15:51.025 "product_name": "Malloc disk", 00:15:51.025 "block_size": 512, 00:15:51.025 "num_blocks": 65536, 00:15:51.025 "uuid": "b402af76-b9e9-4474-8938-80260e6dffc8", 00:15:51.025 "assigned_rate_limits": { 00:15:51.025 "rw_ios_per_sec": 0, 00:15:51.025 "rw_mbytes_per_sec": 0, 00:15:51.025 "r_mbytes_per_sec": 0, 00:15:51.025 "w_mbytes_per_sec": 0 00:15:51.025 }, 00:15:51.025 "claimed": false, 00:15:51.025 "zoned": false, 00:15:51.025 "supported_io_types": { 00:15:51.025 "read": true, 00:15:51.025 "write": true, 00:15:51.025 "unmap": true, 00:15:51.025 "flush": true, 00:15:51.025 "reset": true, 00:15:51.025 "nvme_admin": false, 00:15:51.025 "nvme_io": false, 00:15:51.025 "nvme_io_md": false, 00:15:51.025 "write_zeroes": true, 00:15:51.025 "zcopy": true, 00:15:51.025 "get_zone_info": false, 00:15:51.025 "zone_management": false, 00:15:51.025 "zone_append": false, 00:15:51.025 "compare": false, 00:15:51.025 "compare_and_write": false, 00:15:51.025 "abort": true, 00:15:51.025 "seek_hole": false, 00:15:51.025 "seek_data": false, 00:15:51.025 "copy": true, 00:15:51.025 "nvme_iov_md": false 00:15:51.025 }, 00:15:51.025 "memory_domains": [ 00:15:51.025 { 00:15:51.025 "dma_device_id": "system", 00:15:51.025 "dma_device_type": 1 00:15:51.025 }, 00:15:51.025 { 00:15:51.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.025 "dma_device_type": 2 00:15:51.025 } 00:15:51.025 ], 00:15:51.025 "driver_specific": {} 00:15:51.025 } 00:15:51.025 ] 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:51.025 13:35:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.025 [2024-11-20 13:35:50.464252] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:51.025 [2024-11-20 13:35:50.464299] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:51.025 [2024-11-20 13:35:50.464326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:51.025 [2024-11-20 13:35:50.466473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:51.025 [2024-11-20 13:35:50.466676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:51.025 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.026 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:51.026 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.026 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.026 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:51.026 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.026 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:15:51.026 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.026 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.026 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.026 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.026 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.026 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.026 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.026 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.026 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.284 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.284 "name": "Existed_Raid", 00:15:51.284 "uuid": "8f1f3610-ab9f-40b2-b3d6-64e285d0f20d", 00:15:51.284 "strip_size_kb": 64, 00:15:51.284 "state": "configuring", 00:15:51.284 "raid_level": "raid0", 00:15:51.284 "superblock": true, 00:15:51.284 "num_base_bdevs": 4, 00:15:51.284 "num_base_bdevs_discovered": 3, 00:15:51.284 "num_base_bdevs_operational": 4, 00:15:51.284 "base_bdevs_list": [ 00:15:51.284 { 00:15:51.284 "name": "BaseBdev1", 00:15:51.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.284 "is_configured": false, 00:15:51.284 "data_offset": 0, 00:15:51.284 "data_size": 0 00:15:51.284 }, 00:15:51.284 { 00:15:51.284 "name": "BaseBdev2", 00:15:51.284 "uuid": "6d0e4e30-8840-49e6-b509-618d205d012d", 00:15:51.284 "is_configured": true, 00:15:51.284 "data_offset": 2048, 00:15:51.284 "data_size": 63488 
00:15:51.284 }, 00:15:51.285 { 00:15:51.285 "name": "BaseBdev3", 00:15:51.285 "uuid": "e6e8c7ca-79d8-439b-b3d3-a216bfe9e38f", 00:15:51.285 "is_configured": true, 00:15:51.285 "data_offset": 2048, 00:15:51.285 "data_size": 63488 00:15:51.285 }, 00:15:51.285 { 00:15:51.285 "name": "BaseBdev4", 00:15:51.285 "uuid": "b402af76-b9e9-4474-8938-80260e6dffc8", 00:15:51.285 "is_configured": true, 00:15:51.285 "data_offset": 2048, 00:15:51.285 "data_size": 63488 00:15:51.285 } 00:15:51.285 ] 00:15:51.285 }' 00:15:51.285 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.285 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.544 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:51.544 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.544 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.544 [2024-11-20 13:35:50.891678] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:51.544 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.544 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:51.544 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.544 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.544 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:51.544 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.544 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:15:51.544 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.544 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.544 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.544 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.544 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.544 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.544 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.544 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.544 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.544 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.544 "name": "Existed_Raid", 00:15:51.544 "uuid": "8f1f3610-ab9f-40b2-b3d6-64e285d0f20d", 00:15:51.544 "strip_size_kb": 64, 00:15:51.544 "state": "configuring", 00:15:51.544 "raid_level": "raid0", 00:15:51.544 "superblock": true, 00:15:51.544 "num_base_bdevs": 4, 00:15:51.544 "num_base_bdevs_discovered": 2, 00:15:51.544 "num_base_bdevs_operational": 4, 00:15:51.544 "base_bdevs_list": [ 00:15:51.544 { 00:15:51.544 "name": "BaseBdev1", 00:15:51.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.544 "is_configured": false, 00:15:51.544 "data_offset": 0, 00:15:51.544 "data_size": 0 00:15:51.544 }, 00:15:51.544 { 00:15:51.544 "name": null, 00:15:51.544 "uuid": "6d0e4e30-8840-49e6-b509-618d205d012d", 00:15:51.544 "is_configured": false, 00:15:51.544 "data_offset": 0, 00:15:51.544 "data_size": 63488 
00:15:51.544 }, 00:15:51.544 { 00:15:51.544 "name": "BaseBdev3", 00:15:51.544 "uuid": "e6e8c7ca-79d8-439b-b3d3-a216bfe9e38f", 00:15:51.544 "is_configured": true, 00:15:51.544 "data_offset": 2048, 00:15:51.544 "data_size": 63488 00:15:51.544 }, 00:15:51.544 { 00:15:51.544 "name": "BaseBdev4", 00:15:51.544 "uuid": "b402af76-b9e9-4474-8938-80260e6dffc8", 00:15:51.544 "is_configured": true, 00:15:51.544 "data_offset": 2048, 00:15:51.544 "data_size": 63488 00:15:51.544 } 00:15:51.544 ] 00:15:51.544 }' 00:15:51.544 13:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.544 13:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.111 13:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.111 13:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.111 13:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.111 13:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:52.111 13:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.111 13:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:52.111 13:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:52.111 13:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.111 13:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.111 [2024-11-20 13:35:51.378101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:52.111 BaseBdev1 00:15:52.111 13:35:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.111 13:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:52.111 13:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:52.111 13:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:52.111 13:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:52.111 13:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:52.111 13:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:52.111 13:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:52.111 13:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.111 13:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.111 13:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.111 13:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:52.111 13:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.111 13:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.111 [ 00:15:52.111 { 00:15:52.111 "name": "BaseBdev1", 00:15:52.111 "aliases": [ 00:15:52.111 "a000fa9a-7303-4d71-999d-53429f1a5cf7" 00:15:52.111 ], 00:15:52.112 "product_name": "Malloc disk", 00:15:52.112 "block_size": 512, 00:15:52.112 "num_blocks": 65536, 00:15:52.112 "uuid": "a000fa9a-7303-4d71-999d-53429f1a5cf7", 00:15:52.112 "assigned_rate_limits": { 00:15:52.112 "rw_ios_per_sec": 0, 00:15:52.112 "rw_mbytes_per_sec": 0, 
00:15:52.112 "r_mbytes_per_sec": 0, 00:15:52.112 "w_mbytes_per_sec": 0 00:15:52.112 }, 00:15:52.112 "claimed": true, 00:15:52.112 "claim_type": "exclusive_write", 00:15:52.112 "zoned": false, 00:15:52.112 "supported_io_types": { 00:15:52.112 "read": true, 00:15:52.112 "write": true, 00:15:52.112 "unmap": true, 00:15:52.112 "flush": true, 00:15:52.112 "reset": true, 00:15:52.112 "nvme_admin": false, 00:15:52.112 "nvme_io": false, 00:15:52.112 "nvme_io_md": false, 00:15:52.112 "write_zeroes": true, 00:15:52.112 "zcopy": true, 00:15:52.112 "get_zone_info": false, 00:15:52.112 "zone_management": false, 00:15:52.112 "zone_append": false, 00:15:52.112 "compare": false, 00:15:52.112 "compare_and_write": false, 00:15:52.112 "abort": true, 00:15:52.112 "seek_hole": false, 00:15:52.112 "seek_data": false, 00:15:52.112 "copy": true, 00:15:52.112 "nvme_iov_md": false 00:15:52.112 }, 00:15:52.112 "memory_domains": [ 00:15:52.112 { 00:15:52.112 "dma_device_id": "system", 00:15:52.112 "dma_device_type": 1 00:15:52.112 }, 00:15:52.112 { 00:15:52.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.112 "dma_device_type": 2 00:15:52.112 } 00:15:52.112 ], 00:15:52.112 "driver_specific": {} 00:15:52.112 } 00:15:52.112 ] 00:15:52.112 13:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.112 13:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:52.112 13:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:52.112 13:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:52.112 13:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:52.112 13:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:52.112 13:35:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.112 13:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:52.112 13:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.112 13:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.112 13:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.112 13:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.112 13:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.112 13:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.112 13:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.112 13:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.112 13:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.112 13:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.112 "name": "Existed_Raid", 00:15:52.112 "uuid": "8f1f3610-ab9f-40b2-b3d6-64e285d0f20d", 00:15:52.112 "strip_size_kb": 64, 00:15:52.112 "state": "configuring", 00:15:52.112 "raid_level": "raid0", 00:15:52.112 "superblock": true, 00:15:52.112 "num_base_bdevs": 4, 00:15:52.112 "num_base_bdevs_discovered": 3, 00:15:52.112 "num_base_bdevs_operational": 4, 00:15:52.112 "base_bdevs_list": [ 00:15:52.112 { 00:15:52.112 "name": "BaseBdev1", 00:15:52.112 "uuid": "a000fa9a-7303-4d71-999d-53429f1a5cf7", 00:15:52.112 "is_configured": true, 00:15:52.112 "data_offset": 2048, 00:15:52.112 "data_size": 63488 00:15:52.112 }, 00:15:52.112 { 
00:15:52.112 "name": null, 00:15:52.112 "uuid": "6d0e4e30-8840-49e6-b509-618d205d012d", 00:15:52.112 "is_configured": false, 00:15:52.112 "data_offset": 0, 00:15:52.112 "data_size": 63488 00:15:52.112 }, 00:15:52.112 { 00:15:52.112 "name": "BaseBdev3", 00:15:52.112 "uuid": "e6e8c7ca-79d8-439b-b3d3-a216bfe9e38f", 00:15:52.112 "is_configured": true, 00:15:52.112 "data_offset": 2048, 00:15:52.112 "data_size": 63488 00:15:52.112 }, 00:15:52.112 { 00:15:52.112 "name": "BaseBdev4", 00:15:52.112 "uuid": "b402af76-b9e9-4474-8938-80260e6dffc8", 00:15:52.112 "is_configured": true, 00:15:52.112 "data_offset": 2048, 00:15:52.112 "data_size": 63488 00:15:52.112 } 00:15:52.112 ] 00:15:52.112 }' 00:15:52.112 13:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.112 13:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.370 13:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:52.370 13:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.370 13:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.370 13:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.629 13:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.629 13:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:52.629 13:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:52.629 13:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.629 13:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.629 [2024-11-20 13:35:51.905452] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:52.629 13:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.629 13:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:52.629 13:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:52.629 13:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:52.629 13:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:52.629 13:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.629 13:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:52.629 13:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.629 13:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.629 13:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.629 13:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.629 13:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.629 13:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.629 13:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.629 13:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.629 13:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.629 13:35:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.630 "name": "Existed_Raid", 00:15:52.630 "uuid": "8f1f3610-ab9f-40b2-b3d6-64e285d0f20d", 00:15:52.630 "strip_size_kb": 64, 00:15:52.630 "state": "configuring", 00:15:52.630 "raid_level": "raid0", 00:15:52.630 "superblock": true, 00:15:52.630 "num_base_bdevs": 4, 00:15:52.630 "num_base_bdevs_discovered": 2, 00:15:52.630 "num_base_bdevs_operational": 4, 00:15:52.630 "base_bdevs_list": [ 00:15:52.630 { 00:15:52.630 "name": "BaseBdev1", 00:15:52.630 "uuid": "a000fa9a-7303-4d71-999d-53429f1a5cf7", 00:15:52.630 "is_configured": true, 00:15:52.630 "data_offset": 2048, 00:15:52.630 "data_size": 63488 00:15:52.630 }, 00:15:52.630 { 00:15:52.630 "name": null, 00:15:52.630 "uuid": "6d0e4e30-8840-49e6-b509-618d205d012d", 00:15:52.630 "is_configured": false, 00:15:52.630 "data_offset": 0, 00:15:52.630 "data_size": 63488 00:15:52.630 }, 00:15:52.630 { 00:15:52.630 "name": null, 00:15:52.630 "uuid": "e6e8c7ca-79d8-439b-b3d3-a216bfe9e38f", 00:15:52.630 "is_configured": false, 00:15:52.630 "data_offset": 0, 00:15:52.630 "data_size": 63488 00:15:52.630 }, 00:15:52.630 { 00:15:52.630 "name": "BaseBdev4", 00:15:52.630 "uuid": "b402af76-b9e9-4474-8938-80260e6dffc8", 00:15:52.630 "is_configured": true, 00:15:52.630 "data_offset": 2048, 00:15:52.630 "data_size": 63488 00:15:52.630 } 00:15:52.630 ] 00:15:52.630 }' 00:15:52.630 13:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.630 13:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.890 13:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:52.890 13:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.890 13:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.890 
13:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.890 13:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.890 13:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:52.890 13:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:52.890 13:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.890 13:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.890 [2024-11-20 13:35:52.364859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:52.890 13:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.890 13:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:52.890 13:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:52.890 13:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:52.890 13:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:52.890 13:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.890 13:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:52.890 13:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.890 13:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.890 13:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:52.890 13:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.150 13:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.150 13:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.150 13:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.150 13:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.150 13:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.150 13:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.150 "name": "Existed_Raid", 00:15:53.150 "uuid": "8f1f3610-ab9f-40b2-b3d6-64e285d0f20d", 00:15:53.150 "strip_size_kb": 64, 00:15:53.150 "state": "configuring", 00:15:53.150 "raid_level": "raid0", 00:15:53.150 "superblock": true, 00:15:53.150 "num_base_bdevs": 4, 00:15:53.150 "num_base_bdevs_discovered": 3, 00:15:53.150 "num_base_bdevs_operational": 4, 00:15:53.151 "base_bdevs_list": [ 00:15:53.151 { 00:15:53.151 "name": "BaseBdev1", 00:15:53.151 "uuid": "a000fa9a-7303-4d71-999d-53429f1a5cf7", 00:15:53.151 "is_configured": true, 00:15:53.151 "data_offset": 2048, 00:15:53.151 "data_size": 63488 00:15:53.151 }, 00:15:53.151 { 00:15:53.151 "name": null, 00:15:53.151 "uuid": "6d0e4e30-8840-49e6-b509-618d205d012d", 00:15:53.151 "is_configured": false, 00:15:53.151 "data_offset": 0, 00:15:53.151 "data_size": 63488 00:15:53.151 }, 00:15:53.151 { 00:15:53.151 "name": "BaseBdev3", 00:15:53.151 "uuid": "e6e8c7ca-79d8-439b-b3d3-a216bfe9e38f", 00:15:53.151 "is_configured": true, 00:15:53.151 "data_offset": 2048, 00:15:53.151 "data_size": 63488 00:15:53.151 }, 00:15:53.151 { 00:15:53.151 "name": "BaseBdev4", 00:15:53.151 "uuid": 
"b402af76-b9e9-4474-8938-80260e6dffc8", 00:15:53.151 "is_configured": true, 00:15:53.151 "data_offset": 2048, 00:15:53.151 "data_size": 63488 00:15:53.151 } 00:15:53.151 ] 00:15:53.151 }' 00:15:53.151 13:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.151 13:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.409 13:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:53.409 13:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.409 13:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.409 13:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.409 13:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.409 13:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:53.409 13:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:53.409 13:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.409 13:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.409 [2024-11-20 13:35:52.828276] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:53.667 13:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.667 13:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:53.667 13:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:53.667 13:35:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:53.667 13:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:53.667 13:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:53.667 13:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:53.667 13:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.667 13:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.667 13:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.667 13:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.667 13:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.667 13:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.667 13:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.667 13:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.667 13:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.667 13:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.667 "name": "Existed_Raid", 00:15:53.667 "uuid": "8f1f3610-ab9f-40b2-b3d6-64e285d0f20d", 00:15:53.667 "strip_size_kb": 64, 00:15:53.667 "state": "configuring", 00:15:53.667 "raid_level": "raid0", 00:15:53.667 "superblock": true, 00:15:53.667 "num_base_bdevs": 4, 00:15:53.667 "num_base_bdevs_discovered": 2, 00:15:53.667 "num_base_bdevs_operational": 4, 00:15:53.667 "base_bdevs_list": [ 00:15:53.667 { 00:15:53.667 "name": null, 00:15:53.667 
"uuid": "a000fa9a-7303-4d71-999d-53429f1a5cf7", 00:15:53.667 "is_configured": false, 00:15:53.667 "data_offset": 0, 00:15:53.667 "data_size": 63488 00:15:53.667 }, 00:15:53.667 { 00:15:53.667 "name": null, 00:15:53.667 "uuid": "6d0e4e30-8840-49e6-b509-618d205d012d", 00:15:53.667 "is_configured": false, 00:15:53.667 "data_offset": 0, 00:15:53.667 "data_size": 63488 00:15:53.667 }, 00:15:53.667 { 00:15:53.667 "name": "BaseBdev3", 00:15:53.667 "uuid": "e6e8c7ca-79d8-439b-b3d3-a216bfe9e38f", 00:15:53.667 "is_configured": true, 00:15:53.668 "data_offset": 2048, 00:15:53.668 "data_size": 63488 00:15:53.668 }, 00:15:53.668 { 00:15:53.668 "name": "BaseBdev4", 00:15:53.668 "uuid": "b402af76-b9e9-4474-8938-80260e6dffc8", 00:15:53.668 "is_configured": true, 00:15:53.668 "data_offset": 2048, 00:15:53.668 "data_size": 63488 00:15:53.668 } 00:15:53.668 ] 00:15:53.668 }' 00:15:53.668 13:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.668 13:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.926 13:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.926 13:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:53.926 13:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.926 13:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.926 13:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.926 13:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:53.926 13:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:53.926 13:35:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.926 13:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.926 [2024-11-20 13:35:53.365179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:53.926 13:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.926 13:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:53.926 13:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:53.926 13:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:53.926 13:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:53.926 13:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:53.926 13:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:53.926 13:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.926 13:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.926 13:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.926 13:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.926 13:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.926 13:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.926 13:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.926 13:35:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.926 13:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.926 13:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.926 "name": "Existed_Raid", 00:15:53.926 "uuid": "8f1f3610-ab9f-40b2-b3d6-64e285d0f20d", 00:15:53.926 "strip_size_kb": 64, 00:15:53.926 "state": "configuring", 00:15:53.926 "raid_level": "raid0", 00:15:53.926 "superblock": true, 00:15:53.926 "num_base_bdevs": 4, 00:15:53.926 "num_base_bdevs_discovered": 3, 00:15:53.926 "num_base_bdevs_operational": 4, 00:15:53.926 "base_bdevs_list": [ 00:15:53.926 { 00:15:53.926 "name": null, 00:15:53.926 "uuid": "a000fa9a-7303-4d71-999d-53429f1a5cf7", 00:15:53.926 "is_configured": false, 00:15:53.926 "data_offset": 0, 00:15:53.926 "data_size": 63488 00:15:53.926 }, 00:15:53.926 { 00:15:53.926 "name": "BaseBdev2", 00:15:53.926 "uuid": "6d0e4e30-8840-49e6-b509-618d205d012d", 00:15:53.926 "is_configured": true, 00:15:53.926 "data_offset": 2048, 00:15:53.926 "data_size": 63488 00:15:53.926 }, 00:15:53.926 { 00:15:53.926 "name": "BaseBdev3", 00:15:53.926 "uuid": "e6e8c7ca-79d8-439b-b3d3-a216bfe9e38f", 00:15:53.926 "is_configured": true, 00:15:53.926 "data_offset": 2048, 00:15:53.926 "data_size": 63488 00:15:53.926 }, 00:15:53.926 { 00:15:53.926 "name": "BaseBdev4", 00:15:53.926 "uuid": "b402af76-b9e9-4474-8938-80260e6dffc8", 00:15:53.926 "is_configured": true, 00:15:53.926 "data_offset": 2048, 00:15:53.926 "data_size": 63488 00:15:53.926 } 00:15:53.926 ] 00:15:53.926 }' 00:15:53.926 13:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.926 13:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.492 13:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.492 13:35:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.492 13:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.492 13:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:54.492 13:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.492 13:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:54.492 13:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.492 13:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:54.492 13:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.492 13:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.492 13:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.493 13:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a000fa9a-7303-4d71-999d-53429f1a5cf7 00:15:54.493 13:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.493 13:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.493 [2024-11-20 13:35:53.906764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:54.493 [2024-11-20 13:35:53.906987] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:54.493 [2024-11-20 13:35:53.907003] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:54.493 NewBaseBdev 00:15:54.493 [2024-11-20 13:35:53.907333] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:54.493 [2024-11-20 13:35:53.907487] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:54.493 [2024-11-20 13:35:53.907500] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:54.493 [2024-11-20 13:35:53.907623] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:54.493 13:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.493 13:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:54.493 13:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:54.493 13:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:54.493 13:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:54.493 13:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:54.493 13:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:54.493 13:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:54.493 13:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.493 13:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.493 13:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.493 13:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:54.493 13:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.493 
13:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.493 [ 00:15:54.493 { 00:15:54.493 "name": "NewBaseBdev", 00:15:54.493 "aliases": [ 00:15:54.493 "a000fa9a-7303-4d71-999d-53429f1a5cf7" 00:15:54.493 ], 00:15:54.493 "product_name": "Malloc disk", 00:15:54.493 "block_size": 512, 00:15:54.493 "num_blocks": 65536, 00:15:54.493 "uuid": "a000fa9a-7303-4d71-999d-53429f1a5cf7", 00:15:54.493 "assigned_rate_limits": { 00:15:54.493 "rw_ios_per_sec": 0, 00:15:54.493 "rw_mbytes_per_sec": 0, 00:15:54.493 "r_mbytes_per_sec": 0, 00:15:54.493 "w_mbytes_per_sec": 0 00:15:54.493 }, 00:15:54.493 "claimed": true, 00:15:54.493 "claim_type": "exclusive_write", 00:15:54.493 "zoned": false, 00:15:54.493 "supported_io_types": { 00:15:54.493 "read": true, 00:15:54.493 "write": true, 00:15:54.493 "unmap": true, 00:15:54.493 "flush": true, 00:15:54.493 "reset": true, 00:15:54.493 "nvme_admin": false, 00:15:54.493 "nvme_io": false, 00:15:54.493 "nvme_io_md": false, 00:15:54.493 "write_zeroes": true, 00:15:54.493 "zcopy": true, 00:15:54.493 "get_zone_info": false, 00:15:54.493 "zone_management": false, 00:15:54.493 "zone_append": false, 00:15:54.493 "compare": false, 00:15:54.493 "compare_and_write": false, 00:15:54.493 "abort": true, 00:15:54.493 "seek_hole": false, 00:15:54.493 "seek_data": false, 00:15:54.493 "copy": true, 00:15:54.493 "nvme_iov_md": false 00:15:54.493 }, 00:15:54.493 "memory_domains": [ 00:15:54.493 { 00:15:54.493 "dma_device_id": "system", 00:15:54.493 "dma_device_type": 1 00:15:54.493 }, 00:15:54.493 { 00:15:54.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.493 "dma_device_type": 2 00:15:54.493 } 00:15:54.493 ], 00:15:54.493 "driver_specific": {} 00:15:54.493 } 00:15:54.493 ] 00:15:54.493 13:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.493 13:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:54.493 13:35:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:15:54.493 13:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:54.493 13:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.493 13:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:54.493 13:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.493 13:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:54.493 13:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.493 13:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.493 13:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.493 13:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.493 13:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.493 13:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.493 13:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.493 13:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.752 13:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.752 13:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.752 "name": "Existed_Raid", 00:15:54.752 "uuid": "8f1f3610-ab9f-40b2-b3d6-64e285d0f20d", 00:15:54.752 "strip_size_kb": 64, 00:15:54.752 
"state": "online", 00:15:54.752 "raid_level": "raid0", 00:15:54.752 "superblock": true, 00:15:54.752 "num_base_bdevs": 4, 00:15:54.752 "num_base_bdevs_discovered": 4, 00:15:54.752 "num_base_bdevs_operational": 4, 00:15:54.752 "base_bdevs_list": [ 00:15:54.752 { 00:15:54.752 "name": "NewBaseBdev", 00:15:54.752 "uuid": "a000fa9a-7303-4d71-999d-53429f1a5cf7", 00:15:54.752 "is_configured": true, 00:15:54.752 "data_offset": 2048, 00:15:54.752 "data_size": 63488 00:15:54.752 }, 00:15:54.752 { 00:15:54.752 "name": "BaseBdev2", 00:15:54.752 "uuid": "6d0e4e30-8840-49e6-b509-618d205d012d", 00:15:54.752 "is_configured": true, 00:15:54.752 "data_offset": 2048, 00:15:54.752 "data_size": 63488 00:15:54.752 }, 00:15:54.752 { 00:15:54.752 "name": "BaseBdev3", 00:15:54.752 "uuid": "e6e8c7ca-79d8-439b-b3d3-a216bfe9e38f", 00:15:54.752 "is_configured": true, 00:15:54.752 "data_offset": 2048, 00:15:54.752 "data_size": 63488 00:15:54.752 }, 00:15:54.752 { 00:15:54.752 "name": "BaseBdev4", 00:15:54.752 "uuid": "b402af76-b9e9-4474-8938-80260e6dffc8", 00:15:54.752 "is_configured": true, 00:15:54.752 "data_offset": 2048, 00:15:54.752 "data_size": 63488 00:15:54.752 } 00:15:54.752 ] 00:15:54.752 }' 00:15:54.752 13:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.752 13:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.011 13:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:55.011 13:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:55.011 13:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:55.011 13:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:55.011 13:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:55.011 
13:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:55.011 13:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:55.011 13:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:55.011 13:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.011 13:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.011 [2024-11-20 13:35:54.378786] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:55.011 13:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.011 13:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:55.011 "name": "Existed_Raid", 00:15:55.011 "aliases": [ 00:15:55.011 "8f1f3610-ab9f-40b2-b3d6-64e285d0f20d" 00:15:55.011 ], 00:15:55.011 "product_name": "Raid Volume", 00:15:55.011 "block_size": 512, 00:15:55.011 "num_blocks": 253952, 00:15:55.011 "uuid": "8f1f3610-ab9f-40b2-b3d6-64e285d0f20d", 00:15:55.011 "assigned_rate_limits": { 00:15:55.011 "rw_ios_per_sec": 0, 00:15:55.011 "rw_mbytes_per_sec": 0, 00:15:55.011 "r_mbytes_per_sec": 0, 00:15:55.011 "w_mbytes_per_sec": 0 00:15:55.011 }, 00:15:55.011 "claimed": false, 00:15:55.011 "zoned": false, 00:15:55.011 "supported_io_types": { 00:15:55.011 "read": true, 00:15:55.011 "write": true, 00:15:55.011 "unmap": true, 00:15:55.011 "flush": true, 00:15:55.011 "reset": true, 00:15:55.011 "nvme_admin": false, 00:15:55.011 "nvme_io": false, 00:15:55.011 "nvme_io_md": false, 00:15:55.011 "write_zeroes": true, 00:15:55.011 "zcopy": false, 00:15:55.011 "get_zone_info": false, 00:15:55.011 "zone_management": false, 00:15:55.011 "zone_append": false, 00:15:55.011 "compare": false, 00:15:55.011 "compare_and_write": false, 00:15:55.011 "abort": 
false, 00:15:55.011 "seek_hole": false, 00:15:55.011 "seek_data": false, 00:15:55.011 "copy": false, 00:15:55.011 "nvme_iov_md": false 00:15:55.011 }, 00:15:55.011 "memory_domains": [ 00:15:55.011 { 00:15:55.011 "dma_device_id": "system", 00:15:55.011 "dma_device_type": 1 00:15:55.011 }, 00:15:55.011 { 00:15:55.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.011 "dma_device_type": 2 00:15:55.011 }, 00:15:55.011 { 00:15:55.011 "dma_device_id": "system", 00:15:55.011 "dma_device_type": 1 00:15:55.011 }, 00:15:55.011 { 00:15:55.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.011 "dma_device_type": 2 00:15:55.011 }, 00:15:55.011 { 00:15:55.011 "dma_device_id": "system", 00:15:55.011 "dma_device_type": 1 00:15:55.011 }, 00:15:55.011 { 00:15:55.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.011 "dma_device_type": 2 00:15:55.011 }, 00:15:55.011 { 00:15:55.011 "dma_device_id": "system", 00:15:55.011 "dma_device_type": 1 00:15:55.011 }, 00:15:55.011 { 00:15:55.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.011 "dma_device_type": 2 00:15:55.011 } 00:15:55.011 ], 00:15:55.011 "driver_specific": { 00:15:55.011 "raid": { 00:15:55.011 "uuid": "8f1f3610-ab9f-40b2-b3d6-64e285d0f20d", 00:15:55.011 "strip_size_kb": 64, 00:15:55.011 "state": "online", 00:15:55.011 "raid_level": "raid0", 00:15:55.011 "superblock": true, 00:15:55.011 "num_base_bdevs": 4, 00:15:55.011 "num_base_bdevs_discovered": 4, 00:15:55.011 "num_base_bdevs_operational": 4, 00:15:55.011 "base_bdevs_list": [ 00:15:55.011 { 00:15:55.011 "name": "NewBaseBdev", 00:15:55.011 "uuid": "a000fa9a-7303-4d71-999d-53429f1a5cf7", 00:15:55.011 "is_configured": true, 00:15:55.011 "data_offset": 2048, 00:15:55.011 "data_size": 63488 00:15:55.011 }, 00:15:55.011 { 00:15:55.011 "name": "BaseBdev2", 00:15:55.011 "uuid": "6d0e4e30-8840-49e6-b509-618d205d012d", 00:15:55.011 "is_configured": true, 00:15:55.011 "data_offset": 2048, 00:15:55.011 "data_size": 63488 00:15:55.011 }, 00:15:55.011 { 00:15:55.011 
"name": "BaseBdev3", 00:15:55.011 "uuid": "e6e8c7ca-79d8-439b-b3d3-a216bfe9e38f", 00:15:55.011 "is_configured": true, 00:15:55.011 "data_offset": 2048, 00:15:55.011 "data_size": 63488 00:15:55.011 }, 00:15:55.011 { 00:15:55.011 "name": "BaseBdev4", 00:15:55.011 "uuid": "b402af76-b9e9-4474-8938-80260e6dffc8", 00:15:55.011 "is_configured": true, 00:15:55.011 "data_offset": 2048, 00:15:55.011 "data_size": 63488 00:15:55.011 } 00:15:55.011 ] 00:15:55.011 } 00:15:55.011 } 00:15:55.011 }' 00:15:55.011 13:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:55.011 13:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:55.011 BaseBdev2 00:15:55.011 BaseBdev3 00:15:55.011 BaseBdev4' 00:15:55.011 13:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:55.011 13:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:55.012 13:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:55.012 13:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:55.012 13:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.012 13:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.012 13:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:55.271 13:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.271 13:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:55.271 13:35:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:55.271 13:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:55.271 13:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:55.271 13:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:55.271 13:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.271 13:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.271 13:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.271 13:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:55.271 13:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:55.271 13:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:55.271 13:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:55.271 13:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:55.271 13:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.271 13:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.271 13:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.271 13:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:55.271 13:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:15:55.271 13:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:55.271 13:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:55.271 13:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.271 13:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.271 13:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:55.271 13:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.271 13:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:55.271 13:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:55.271 13:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:55.271 13:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.271 13:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.271 [2024-11-20 13:35:54.702474] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:55.271 [2024-11-20 13:35:54.702508] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:55.271 [2024-11-20 13:35:54.702596] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:55.271 [2024-11-20 13:35:54.702669] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:55.271 [2024-11-20 13:35:54.702682] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:15:55.271 13:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.271 13:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 69810 00:15:55.271 13:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 69810 ']' 00:15:55.271 13:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 69810 00:15:55.271 13:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:55.271 13:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:55.271 13:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69810 00:15:55.271 13:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:55.271 13:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:55.271 13:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69810' 00:15:55.271 killing process with pid 69810 00:15:55.271 13:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 69810 00:15:55.271 [2024-11-20 13:35:54.749894] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:55.271 13:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 69810 00:15:55.840 [2024-11-20 13:35:55.155362] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:57.220 13:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:57.220 00:15:57.220 real 0m11.326s 00:15:57.220 user 0m17.880s 00:15:57.220 sys 0m2.307s 00:15:57.220 13:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:57.220 
************************************ 00:15:57.220 END TEST raid_state_function_test_sb 00:15:57.220 ************************************ 00:15:57.220 13:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.220 13:35:56 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:15:57.220 13:35:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:57.220 13:35:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:57.220 13:35:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:57.220 ************************************ 00:15:57.220 START TEST raid_superblock_test 00:15:57.220 ************************************ 00:15:57.220 13:35:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:15:57.220 13:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:15:57.220 13:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:15:57.220 13:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:57.220 13:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:57.220 13:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:57.220 13:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:57.220 13:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:57.220 13:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:57.220 13:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:57.220 13:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:57.220 13:35:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:57.220 13:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:57.220 13:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:57.220 13:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:15:57.220 13:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:57.220 13:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:57.220 13:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70480 00:15:57.220 13:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:57.220 13:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70480 00:15:57.220 13:35:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70480 ']' 00:15:57.220 13:35:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.220 13:35:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:57.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.220 13:35:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.220 13:35:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:57.220 13:35:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.220 [2024-11-20 13:35:56.500054] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:15:57.220 [2024-11-20 13:35:56.500188] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70480 ] 00:15:57.220 [2024-11-20 13:35:56.679628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.479 [2024-11-20 13:35:56.797469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.780 [2024-11-20 13:35:57.001987] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:57.780 [2024-11-20 13:35:57.002078] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.042 13:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:58.042 13:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:58.042 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:58.042 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:58.042 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:58.042 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:58.042 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:58.042 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:58.042 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:58.042 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:58.042 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:15:58.042 
13:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.042 13:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.042 malloc1 00:15:58.042 13:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.042 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:58.042 13:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.042 13:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.042 [2024-11-20 13:35:57.408418] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:58.042 [2024-11-20 13:35:57.408493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.042 [2024-11-20 13:35:57.408520] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:58.042 [2024-11-20 13:35:57.408532] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.042 [2024-11-20 13:35:57.410982] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.042 [2024-11-20 13:35:57.411028] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:58.042 pt1 00:15:58.042 13:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.042 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:58.042 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:58.042 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:58.042 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:58.042 13:35:57 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:58.042 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:58.043 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:58.043 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:58.043 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:58.043 13:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.043 13:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.043 malloc2 00:15:58.043 13:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.043 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:58.043 13:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.043 13:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.043 [2024-11-20 13:35:57.457987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:58.043 [2024-11-20 13:35:57.458050] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.043 [2024-11-20 13:35:57.458092] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:58.043 [2024-11-20 13:35:57.458103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.043 [2024-11-20 13:35:57.460526] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.043 [2024-11-20 13:35:57.460565] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:58.043 
pt2 00:15:58.043 13:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.043 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:58.043 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:58.043 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:58.043 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:58.043 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:58.043 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:58.043 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:58.043 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:58.043 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:58.043 13:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.043 13:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.043 malloc3 00:15:58.043 13:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.043 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:58.043 13:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.043 13:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.043 [2024-11-20 13:35:57.521635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:58.043 [2024-11-20 13:35:57.521695] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.043 [2024-11-20 13:35:57.521720] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:58.043 [2024-11-20 13:35:57.521732] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.043 [2024-11-20 13:35:57.524080] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.043 [2024-11-20 13:35:57.524113] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:58.043 pt3 00:15:58.304 13:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.304 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:58.304 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:58.304 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:15:58.304 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:15:58.304 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:58.304 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:58.304 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:58.304 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:58.304 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:15:58.304 13:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.304 13:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.304 malloc4 00:15:58.304 13:35:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.304 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:58.304 13:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.304 13:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.304 [2024-11-20 13:35:57.580632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:58.304 [2024-11-20 13:35:57.580700] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.304 [2024-11-20 13:35:57.580725] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:58.304 [2024-11-20 13:35:57.580736] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.304 [2024-11-20 13:35:57.583101] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.304 [2024-11-20 13:35:57.583141] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:58.304 pt4 00:15:58.304 13:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.304 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:58.304 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:58.304 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:15:58.304 13:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.304 13:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.304 [2024-11-20 13:35:57.592646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:58.304 [2024-11-20 
13:35:57.594685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:58.304 [2024-11-20 13:35:57.594780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:58.304 [2024-11-20 13:35:57.594824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:58.304 [2024-11-20 13:35:57.594998] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:58.304 [2024-11-20 13:35:57.595010] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:58.304 [2024-11-20 13:35:57.595283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:58.304 [2024-11-20 13:35:57.595447] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:58.304 [2024-11-20 13:35:57.595469] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:58.304 [2024-11-20 13:35:57.595609] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.304 13:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.304 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:58.304 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.304 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.304 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:58.304 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.304 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:58.304 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:15:58.304 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.304 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.304 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.304 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.304 13:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.304 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.304 13:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.304 13:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.304 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.304 "name": "raid_bdev1", 00:15:58.304 "uuid": "5299477e-4cef-49b4-8b27-9b20b24c4343", 00:15:58.304 "strip_size_kb": 64, 00:15:58.304 "state": "online", 00:15:58.304 "raid_level": "raid0", 00:15:58.304 "superblock": true, 00:15:58.304 "num_base_bdevs": 4, 00:15:58.304 "num_base_bdevs_discovered": 4, 00:15:58.304 "num_base_bdevs_operational": 4, 00:15:58.305 "base_bdevs_list": [ 00:15:58.305 { 00:15:58.305 "name": "pt1", 00:15:58.305 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:58.305 "is_configured": true, 00:15:58.305 "data_offset": 2048, 00:15:58.305 "data_size": 63488 00:15:58.305 }, 00:15:58.305 { 00:15:58.305 "name": "pt2", 00:15:58.305 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:58.305 "is_configured": true, 00:15:58.305 "data_offset": 2048, 00:15:58.305 "data_size": 63488 00:15:58.305 }, 00:15:58.305 { 00:15:58.305 "name": "pt3", 00:15:58.305 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:58.305 "is_configured": true, 00:15:58.305 "data_offset": 2048, 00:15:58.305 
"data_size": 63488 00:15:58.305 }, 00:15:58.305 { 00:15:58.305 "name": "pt4", 00:15:58.305 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:58.305 "is_configured": true, 00:15:58.305 "data_offset": 2048, 00:15:58.305 "data_size": 63488 00:15:58.305 } 00:15:58.305 ] 00:15:58.305 }' 00:15:58.305 13:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.305 13:35:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.564 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:58.564 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:58.564 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:58.564 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:58.564 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:58.564 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:58.564 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:58.564 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:58.564 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.564 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.564 [2024-11-20 13:35:58.020481] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:58.823 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.823 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:58.823 "name": "raid_bdev1", 00:15:58.823 "aliases": [ 00:15:58.823 "5299477e-4cef-49b4-8b27-9b20b24c4343" 
00:15:58.823 ], 00:15:58.823 "product_name": "Raid Volume", 00:15:58.823 "block_size": 512, 00:15:58.823 "num_blocks": 253952, 00:15:58.823 "uuid": "5299477e-4cef-49b4-8b27-9b20b24c4343", 00:15:58.823 "assigned_rate_limits": { 00:15:58.823 "rw_ios_per_sec": 0, 00:15:58.823 "rw_mbytes_per_sec": 0, 00:15:58.823 "r_mbytes_per_sec": 0, 00:15:58.823 "w_mbytes_per_sec": 0 00:15:58.823 }, 00:15:58.823 "claimed": false, 00:15:58.823 "zoned": false, 00:15:58.823 "supported_io_types": { 00:15:58.823 "read": true, 00:15:58.823 "write": true, 00:15:58.823 "unmap": true, 00:15:58.823 "flush": true, 00:15:58.823 "reset": true, 00:15:58.823 "nvme_admin": false, 00:15:58.823 "nvme_io": false, 00:15:58.823 "nvme_io_md": false, 00:15:58.823 "write_zeroes": true, 00:15:58.823 "zcopy": false, 00:15:58.823 "get_zone_info": false, 00:15:58.823 "zone_management": false, 00:15:58.823 "zone_append": false, 00:15:58.823 "compare": false, 00:15:58.823 "compare_and_write": false, 00:15:58.823 "abort": false, 00:15:58.823 "seek_hole": false, 00:15:58.823 "seek_data": false, 00:15:58.823 "copy": false, 00:15:58.823 "nvme_iov_md": false 00:15:58.823 }, 00:15:58.823 "memory_domains": [ 00:15:58.823 { 00:15:58.823 "dma_device_id": "system", 00:15:58.823 "dma_device_type": 1 00:15:58.823 }, 00:15:58.823 { 00:15:58.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.823 "dma_device_type": 2 00:15:58.823 }, 00:15:58.823 { 00:15:58.823 "dma_device_id": "system", 00:15:58.823 "dma_device_type": 1 00:15:58.823 }, 00:15:58.823 { 00:15:58.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.824 "dma_device_type": 2 00:15:58.824 }, 00:15:58.824 { 00:15:58.824 "dma_device_id": "system", 00:15:58.824 "dma_device_type": 1 00:15:58.824 }, 00:15:58.824 { 00:15:58.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.824 "dma_device_type": 2 00:15:58.824 }, 00:15:58.824 { 00:15:58.824 "dma_device_id": "system", 00:15:58.824 "dma_device_type": 1 00:15:58.824 }, 00:15:58.824 { 00:15:58.824 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:58.824 "dma_device_type": 2 00:15:58.824 } 00:15:58.824 ], 00:15:58.824 "driver_specific": { 00:15:58.824 "raid": { 00:15:58.824 "uuid": "5299477e-4cef-49b4-8b27-9b20b24c4343", 00:15:58.824 "strip_size_kb": 64, 00:15:58.824 "state": "online", 00:15:58.824 "raid_level": "raid0", 00:15:58.824 "superblock": true, 00:15:58.824 "num_base_bdevs": 4, 00:15:58.824 "num_base_bdevs_discovered": 4, 00:15:58.824 "num_base_bdevs_operational": 4, 00:15:58.824 "base_bdevs_list": [ 00:15:58.824 { 00:15:58.824 "name": "pt1", 00:15:58.824 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:58.824 "is_configured": true, 00:15:58.824 "data_offset": 2048, 00:15:58.824 "data_size": 63488 00:15:58.824 }, 00:15:58.824 { 00:15:58.824 "name": "pt2", 00:15:58.824 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:58.824 "is_configured": true, 00:15:58.824 "data_offset": 2048, 00:15:58.824 "data_size": 63488 00:15:58.824 }, 00:15:58.824 { 00:15:58.824 "name": "pt3", 00:15:58.824 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:58.824 "is_configured": true, 00:15:58.824 "data_offset": 2048, 00:15:58.824 "data_size": 63488 00:15:58.824 }, 00:15:58.824 { 00:15:58.824 "name": "pt4", 00:15:58.824 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:58.824 "is_configured": true, 00:15:58.824 "data_offset": 2048, 00:15:58.824 "data_size": 63488 00:15:58.824 } 00:15:58.824 ] 00:15:58.824 } 00:15:58.824 } 00:15:58.824 }' 00:15:58.824 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:58.824 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:58.824 pt2 00:15:58.824 pt3 00:15:58.824 pt4' 00:15:58.824 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.824 13:35:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:58.824 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:58.824 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:58.824 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.824 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.824 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.824 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.824 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:58.824 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:58.824 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:58.824 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:58.824 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.824 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.824 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.824 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.824 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:58.824 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:58.824 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:58.824 13:35:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:58.824 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.824 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.824 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.824 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.824 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:58.824 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:58.824 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:58.824 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:58.824 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.824 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.824 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.824 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.085 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:59.085 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:59.085 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:59.085 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:59.085 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:59.085 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.085 [2024-11-20 13:35:58.319977] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:59.085 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5299477e-4cef-49b4-8b27-9b20b24c4343 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5299477e-4cef-49b4-8b27-9b20b24c4343 ']' 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.086 [2024-11-20 13:35:58.367654] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:59.086 [2024-11-20 13:35:58.367689] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:59.086 [2024-11-20 13:35:58.367771] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:59.086 [2024-11-20 13:35:58.367842] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:59.086 [2024-11-20 13:35:58.367860] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:59.086 13:35:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.086 [2024-11-20 13:35:58.531425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:59.086 [2024-11-20 13:35:58.533515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:59.086 [2024-11-20 13:35:58.533570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:59.086 [2024-11-20 13:35:58.533605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:59.086 [2024-11-20 13:35:58.533653] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:59.086 [2024-11-20 13:35:58.533703] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:59.086 [2024-11-20 13:35:58.533725] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:59.086 [2024-11-20 13:35:58.533746] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:59.086 [2024-11-20 13:35:58.533762] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:59.086 [2024-11-20 13:35:58.533778] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:15:59.086 request: 00:15:59.086 { 00:15:59.086 "name": "raid_bdev1", 00:15:59.086 "raid_level": "raid0", 00:15:59.086 "base_bdevs": [ 00:15:59.086 "malloc1", 00:15:59.086 "malloc2", 00:15:59.086 "malloc3", 00:15:59.086 "malloc4" 00:15:59.086 ], 00:15:59.086 "strip_size_kb": 64, 00:15:59.086 "superblock": false, 00:15:59.086 "method": "bdev_raid_create", 00:15:59.086 "req_id": 1 00:15:59.086 } 00:15:59.086 Got JSON-RPC error response 00:15:59.086 response: 00:15:59.086 { 00:15:59.086 "code": -17, 00:15:59.086 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:59.086 } 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.086 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.346 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:59.346 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:59.346 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:15:59.346 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.346 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.346 [2024-11-20 13:35:58.583325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:59.346 [2024-11-20 13:35:58.583383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.346 [2024-11-20 13:35:58.583404] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:59.346 [2024-11-20 13:35:58.583418] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.346 [2024-11-20 13:35:58.585831] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.346 [2024-11-20 13:35:58.585878] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:59.346 [2024-11-20 13:35:58.585953] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:59.346 [2024-11-20 13:35:58.586013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:59.346 pt1 00:15:59.346 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.346 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:15:59.346 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.346 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:59.346 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:59.346 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.346 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:15:59.346 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.346 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.346 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.346 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.346 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.346 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.346 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.346 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.346 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.346 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.346 "name": "raid_bdev1", 00:15:59.346 "uuid": "5299477e-4cef-49b4-8b27-9b20b24c4343", 00:15:59.346 "strip_size_kb": 64, 00:15:59.346 "state": "configuring", 00:15:59.346 "raid_level": "raid0", 00:15:59.346 "superblock": true, 00:15:59.346 "num_base_bdevs": 4, 00:15:59.346 "num_base_bdevs_discovered": 1, 00:15:59.346 "num_base_bdevs_operational": 4, 00:15:59.346 "base_bdevs_list": [ 00:15:59.346 { 00:15:59.346 "name": "pt1", 00:15:59.346 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:59.346 "is_configured": true, 00:15:59.346 "data_offset": 2048, 00:15:59.346 "data_size": 63488 00:15:59.346 }, 00:15:59.346 { 00:15:59.346 "name": null, 00:15:59.347 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:59.347 "is_configured": false, 00:15:59.347 "data_offset": 2048, 00:15:59.347 "data_size": 63488 00:15:59.347 }, 00:15:59.347 { 00:15:59.347 "name": null, 00:15:59.347 
"uuid": "00000000-0000-0000-0000-000000000003", 00:15:59.347 "is_configured": false, 00:15:59.347 "data_offset": 2048, 00:15:59.347 "data_size": 63488 00:15:59.347 }, 00:15:59.347 { 00:15:59.347 "name": null, 00:15:59.347 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:59.347 "is_configured": false, 00:15:59.347 "data_offset": 2048, 00:15:59.347 "data_size": 63488 00:15:59.347 } 00:15:59.347 ] 00:15:59.347 }' 00:15:59.347 13:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.347 13:35:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.606 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:15:59.606 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:59.606 13:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.606 13:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.606 [2024-11-20 13:35:59.043053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:59.606 [2024-11-20 13:35:59.043147] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.606 [2024-11-20 13:35:59.043170] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:59.606 [2024-11-20 13:35:59.043183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.606 [2024-11-20 13:35:59.043617] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.606 [2024-11-20 13:35:59.043648] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:59.606 [2024-11-20 13:35:59.043729] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:59.606 [2024-11-20 13:35:59.043754] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:59.606 pt2 00:15:59.606 13:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.606 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:59.606 13:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.606 13:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.606 [2024-11-20 13:35:59.051042] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:59.606 13:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.606 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:15:59.606 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.606 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:59.606 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:59.606 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.606 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:59.606 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.606 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.606 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.606 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.606 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.606 13:35:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.606 13:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.606 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.606 13:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.867 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.867 "name": "raid_bdev1", 00:15:59.867 "uuid": "5299477e-4cef-49b4-8b27-9b20b24c4343", 00:15:59.867 "strip_size_kb": 64, 00:15:59.867 "state": "configuring", 00:15:59.867 "raid_level": "raid0", 00:15:59.867 "superblock": true, 00:15:59.867 "num_base_bdevs": 4, 00:15:59.867 "num_base_bdevs_discovered": 1, 00:15:59.867 "num_base_bdevs_operational": 4, 00:15:59.867 "base_bdevs_list": [ 00:15:59.867 { 00:15:59.867 "name": "pt1", 00:15:59.867 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:59.867 "is_configured": true, 00:15:59.867 "data_offset": 2048, 00:15:59.867 "data_size": 63488 00:15:59.867 }, 00:15:59.867 { 00:15:59.867 "name": null, 00:15:59.867 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:59.867 "is_configured": false, 00:15:59.867 "data_offset": 0, 00:15:59.867 "data_size": 63488 00:15:59.867 }, 00:15:59.867 { 00:15:59.867 "name": null, 00:15:59.867 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:59.867 "is_configured": false, 00:15:59.867 "data_offset": 2048, 00:15:59.867 "data_size": 63488 00:15:59.867 }, 00:15:59.867 { 00:15:59.867 "name": null, 00:15:59.867 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:59.867 "is_configured": false, 00:15:59.867 "data_offset": 2048, 00:15:59.867 "data_size": 63488 00:15:59.867 } 00:15:59.867 ] 00:15:59.867 }' 00:15:59.867 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.867 13:35:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:00.127 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:00.127 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:00.127 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:00.127 13:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.127 13:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.127 [2024-11-20 13:35:59.494463] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:00.127 [2024-11-20 13:35:59.494536] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:00.127 [2024-11-20 13:35:59.494559] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:00.127 [2024-11-20 13:35:59.494571] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:00.127 [2024-11-20 13:35:59.495017] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:00.127 [2024-11-20 13:35:59.495044] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:00.127 [2024-11-20 13:35:59.495141] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:00.127 [2024-11-20 13:35:59.495167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:00.127 pt2 00:16:00.127 13:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.127 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:00.127 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:00.127 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:00.127 13:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.127 13:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.127 [2024-11-20 13:35:59.506436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:00.127 [2024-11-20 13:35:59.506493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:00.127 [2024-11-20 13:35:59.506514] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:00.127 [2024-11-20 13:35:59.506525] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:00.127 [2024-11-20 13:35:59.506896] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:00.127 [2024-11-20 13:35:59.506918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:00.127 [2024-11-20 13:35:59.506982] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:00.127 [2024-11-20 13:35:59.507007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:00.127 pt3 00:16:00.127 13:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.127 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:00.127 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:00.127 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:00.127 13:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.128 13:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.128 [2024-11-20 13:35:59.514406] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:00.128 [2024-11-20 13:35:59.514458] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:00.128 [2024-11-20 13:35:59.514476] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:00.128 [2024-11-20 13:35:59.514486] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:00.128 [2024-11-20 13:35:59.514839] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:00.128 [2024-11-20 13:35:59.514865] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:00.128 [2024-11-20 13:35:59.514927] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:00.128 [2024-11-20 13:35:59.514949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:00.128 [2024-11-20 13:35:59.515098] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:00.128 [2024-11-20 13:35:59.515108] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:00.128 [2024-11-20 13:35:59.515355] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:00.128 [2024-11-20 13:35:59.515499] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:00.128 [2024-11-20 13:35:59.515513] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:00.128 [2024-11-20 13:35:59.515636] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.128 pt4 00:16:00.128 13:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.128 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:00.128 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:16:00.128 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:16:00.128 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.128 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.128 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:00.128 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.128 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:00.128 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.128 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.128 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.128 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.128 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.128 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.128 13:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.128 13:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.128 13:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.128 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.128 "name": "raid_bdev1", 00:16:00.128 "uuid": "5299477e-4cef-49b4-8b27-9b20b24c4343", 00:16:00.128 "strip_size_kb": 64, 00:16:00.128 "state": "online", 00:16:00.128 "raid_level": "raid0", 00:16:00.128 
"superblock": true, 00:16:00.128 "num_base_bdevs": 4, 00:16:00.128 "num_base_bdevs_discovered": 4, 00:16:00.128 "num_base_bdevs_operational": 4, 00:16:00.128 "base_bdevs_list": [ 00:16:00.128 { 00:16:00.128 "name": "pt1", 00:16:00.128 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:00.128 "is_configured": true, 00:16:00.128 "data_offset": 2048, 00:16:00.128 "data_size": 63488 00:16:00.128 }, 00:16:00.128 { 00:16:00.128 "name": "pt2", 00:16:00.128 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:00.128 "is_configured": true, 00:16:00.128 "data_offset": 2048, 00:16:00.128 "data_size": 63488 00:16:00.128 }, 00:16:00.128 { 00:16:00.128 "name": "pt3", 00:16:00.128 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:00.128 "is_configured": true, 00:16:00.128 "data_offset": 2048, 00:16:00.128 "data_size": 63488 00:16:00.128 }, 00:16:00.128 { 00:16:00.128 "name": "pt4", 00:16:00.128 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:00.128 "is_configured": true, 00:16:00.128 "data_offset": 2048, 00:16:00.128 "data_size": 63488 00:16:00.128 } 00:16:00.128 ] 00:16:00.128 }' 00:16:00.128 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.128 13:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.695 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:00.695 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:00.695 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:00.695 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:00.695 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:00.695 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:00.695 13:35:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:00.695 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:00.695 13:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.695 13:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.695 [2024-11-20 13:35:59.930756] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:00.695 13:35:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.695 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:00.695 "name": "raid_bdev1", 00:16:00.695 "aliases": [ 00:16:00.695 "5299477e-4cef-49b4-8b27-9b20b24c4343" 00:16:00.695 ], 00:16:00.695 "product_name": "Raid Volume", 00:16:00.695 "block_size": 512, 00:16:00.695 "num_blocks": 253952, 00:16:00.695 "uuid": "5299477e-4cef-49b4-8b27-9b20b24c4343", 00:16:00.695 "assigned_rate_limits": { 00:16:00.695 "rw_ios_per_sec": 0, 00:16:00.695 "rw_mbytes_per_sec": 0, 00:16:00.695 "r_mbytes_per_sec": 0, 00:16:00.695 "w_mbytes_per_sec": 0 00:16:00.695 }, 00:16:00.695 "claimed": false, 00:16:00.695 "zoned": false, 00:16:00.695 "supported_io_types": { 00:16:00.695 "read": true, 00:16:00.695 "write": true, 00:16:00.695 "unmap": true, 00:16:00.695 "flush": true, 00:16:00.695 "reset": true, 00:16:00.695 "nvme_admin": false, 00:16:00.695 "nvme_io": false, 00:16:00.695 "nvme_io_md": false, 00:16:00.695 "write_zeroes": true, 00:16:00.695 "zcopy": false, 00:16:00.695 "get_zone_info": false, 00:16:00.695 "zone_management": false, 00:16:00.695 "zone_append": false, 00:16:00.695 "compare": false, 00:16:00.695 "compare_and_write": false, 00:16:00.695 "abort": false, 00:16:00.695 "seek_hole": false, 00:16:00.695 "seek_data": false, 00:16:00.695 "copy": false, 00:16:00.695 "nvme_iov_md": false 00:16:00.695 }, 00:16:00.695 
"memory_domains": [ 00:16:00.695 { 00:16:00.695 "dma_device_id": "system", 00:16:00.695 "dma_device_type": 1 00:16:00.695 }, 00:16:00.695 { 00:16:00.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.695 "dma_device_type": 2 00:16:00.695 }, 00:16:00.695 { 00:16:00.695 "dma_device_id": "system", 00:16:00.695 "dma_device_type": 1 00:16:00.695 }, 00:16:00.695 { 00:16:00.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.695 "dma_device_type": 2 00:16:00.695 }, 00:16:00.695 { 00:16:00.695 "dma_device_id": "system", 00:16:00.695 "dma_device_type": 1 00:16:00.695 }, 00:16:00.695 { 00:16:00.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.695 "dma_device_type": 2 00:16:00.695 }, 00:16:00.695 { 00:16:00.695 "dma_device_id": "system", 00:16:00.695 "dma_device_type": 1 00:16:00.695 }, 00:16:00.695 { 00:16:00.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.695 "dma_device_type": 2 00:16:00.695 } 00:16:00.695 ], 00:16:00.695 "driver_specific": { 00:16:00.695 "raid": { 00:16:00.695 "uuid": "5299477e-4cef-49b4-8b27-9b20b24c4343", 00:16:00.695 "strip_size_kb": 64, 00:16:00.695 "state": "online", 00:16:00.695 "raid_level": "raid0", 00:16:00.695 "superblock": true, 00:16:00.695 "num_base_bdevs": 4, 00:16:00.695 "num_base_bdevs_discovered": 4, 00:16:00.695 "num_base_bdevs_operational": 4, 00:16:00.695 "base_bdevs_list": [ 00:16:00.695 { 00:16:00.695 "name": "pt1", 00:16:00.695 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:00.695 "is_configured": true, 00:16:00.695 "data_offset": 2048, 00:16:00.695 "data_size": 63488 00:16:00.695 }, 00:16:00.695 { 00:16:00.695 "name": "pt2", 00:16:00.695 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:00.695 "is_configured": true, 00:16:00.695 "data_offset": 2048, 00:16:00.695 "data_size": 63488 00:16:00.695 }, 00:16:00.695 { 00:16:00.695 "name": "pt3", 00:16:00.695 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:00.695 "is_configured": true, 00:16:00.695 "data_offset": 2048, 00:16:00.695 "data_size": 63488 
00:16:00.695 }, 00:16:00.695 { 00:16:00.695 "name": "pt4", 00:16:00.695 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:00.695 "is_configured": true, 00:16:00.695 "data_offset": 2048, 00:16:00.695 "data_size": 63488 00:16:00.695 } 00:16:00.695 ] 00:16:00.695 } 00:16:00.695 } 00:16:00.695 }' 00:16:00.696 13:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:00.696 13:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:00.696 pt2 00:16:00.696 pt3 00:16:00.696 pt4' 00:16:00.696 13:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.696 13:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:00.696 13:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:00.696 13:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:00.696 13:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.696 13:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.696 13:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.696 13:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.696 13:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:00.696 13:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:00.696 13:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:00.696 13:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.696 13:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:00.696 13:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.696 13:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.696 13:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.696 13:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:00.696 13:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:00.696 13:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:00.696 13:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:00.696 13:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.696 13:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.696 13:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.696 13:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.696 13:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:00.696 13:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:00.954 13:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:00.954 13:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:00.954 13:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.954 13:36:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:00.954 13:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.954 13:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.954 13:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:00.954 13:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:00.954 13:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:00.954 13:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:00.954 13:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.954 13:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.954 [2024-11-20 13:36:00.238730] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:00.954 13:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.954 13:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5299477e-4cef-49b4-8b27-9b20b24c4343 '!=' 5299477e-4cef-49b4-8b27-9b20b24c4343 ']' 00:16:00.954 13:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:16:00.954 13:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:00.954 13:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:00.954 13:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70480 00:16:00.954 13:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70480 ']' 00:16:00.954 13:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70480 00:16:00.954 13:36:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:16:00.954 13:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:00.954 13:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70480 00:16:00.954 13:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:00.954 killing process with pid 70480 00:16:00.954 13:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:00.954 13:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70480' 00:16:00.954 13:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70480 00:16:00.954 13:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70480 00:16:00.954 [2024-11-20 13:36:00.329311] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:00.954 [2024-11-20 13:36:00.329406] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:00.954 [2024-11-20 13:36:00.329482] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:00.954 [2024-11-20 13:36:00.329493] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:01.520 [2024-11-20 13:36:00.730833] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:02.455 13:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:02.455 00:16:02.455 real 0m5.477s 00:16:02.455 user 0m7.780s 00:16:02.455 sys 0m1.079s 00:16:02.455 13:36:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:02.455 13:36:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.455 ************************************ 00:16:02.455 END TEST raid_superblock_test 
00:16:02.455 ************************************ 00:16:02.455 13:36:01 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:16:02.713 13:36:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:02.713 13:36:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:02.713 13:36:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:02.713 ************************************ 00:16:02.713 START TEST raid_read_error_test 00:16:02.713 ************************************ 00:16:02.713 13:36:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:16:02.713 13:36:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:16:02.713 13:36:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:16:02.713 13:36:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:16:02.713 13:36:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:16:02.713 13:36:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:02.713 13:36:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:16:02.713 13:36:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:02.713 13:36:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:02.713 13:36:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:16:02.713 13:36:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:02.713 13:36:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:02.713 13:36:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:16:02.713 13:36:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:16:02.713 13:36:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:02.713 13:36:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:16:02.713 13:36:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:02.713 13:36:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:02.713 13:36:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:02.713 13:36:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:16:02.713 13:36:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:16:02.713 13:36:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:16:02.713 13:36:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:16:02.713 13:36:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:16:02.713 13:36:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:16:02.714 13:36:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:16:02.714 13:36:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:16:02.714 13:36:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:16:02.714 13:36:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:16:02.714 13:36:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.tYI2Y0lPR6 00:16:02.714 13:36:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70749 00:16:02.714 13:36:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70749 00:16:02.714 13:36:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:02.714 13:36:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 70749 ']' 00:16:02.714 13:36:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.714 13:36:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:02.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.714 13:36:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.714 13:36:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:02.714 13:36:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.714 [2024-11-20 13:36:02.065373] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:16:02.714 [2024-11-20 13:36:02.065501] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70749 ] 00:16:02.972 [2024-11-20 13:36:02.246614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.972 [2024-11-20 13:36:02.362753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.231 [2024-11-20 13:36:02.582384] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:03.231 [2024-11-20 13:36:02.582452] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:03.490 13:36:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:03.490 13:36:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:16:03.490 13:36:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:03.490 13:36:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:03.490 13:36:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.490 13:36:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.490 BaseBdev1_malloc 00:16:03.490 13:36:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.490 13:36:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:16:03.490 13:36:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.490 13:36:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.490 true 00:16:03.490 13:36:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:03.490 13:36:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:03.490 13:36:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.490 13:36:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.490 [2024-11-20 13:36:02.971856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:03.490 [2024-11-20 13:36:02.971917] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.490 [2024-11-20 13:36:02.971939] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:03.490 [2024-11-20 13:36:02.971953] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.490 [2024-11-20 13:36:02.974287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.490 [2024-11-20 13:36:02.974340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:03.751 BaseBdev1 00:16:03.751 13:36:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.751 13:36:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:03.751 13:36:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:03.751 13:36:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.751 13:36:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.751 BaseBdev2_malloc 00:16:03.751 13:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.751 13:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:16:03.751 13:36:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.751 13:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.751 true 00:16:03.751 13:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.751 13:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:03.751 13:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.751 13:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.751 [2024-11-20 13:36:03.040646] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:03.751 [2024-11-20 13:36:03.040711] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.751 [2024-11-20 13:36:03.040731] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:03.751 [2024-11-20 13:36:03.040745] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.751 [2024-11-20 13:36:03.043227] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.751 [2024-11-20 13:36:03.043273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:03.751 BaseBdev2 00:16:03.751 13:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.751 13:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:03.751 13:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:03.751 13:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.751 13:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.751 BaseBdev3_malloc 00:16:03.751 13:36:03 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.751 13:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:16:03.751 13:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.751 13:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.751 true 00:16:03.751 13:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.751 13:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:03.751 13:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.751 13:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.751 [2024-11-20 13:36:03.121856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:03.751 [2024-11-20 13:36:03.121916] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.751 [2024-11-20 13:36:03.121937] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:03.751 [2024-11-20 13:36:03.121951] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.751 [2024-11-20 13:36:03.124308] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.751 [2024-11-20 13:36:03.124353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:03.751 BaseBdev3 00:16:03.751 13:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.751 13:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:03.751 13:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:16:03.751 13:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.751 13:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.751 BaseBdev4_malloc 00:16:03.751 13:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.751 13:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:16:03.751 13:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.751 13:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.751 true 00:16:03.751 13:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.751 13:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:03.751 13:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.751 13:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.751 [2024-11-20 13:36:03.186801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:03.751 [2024-11-20 13:36:03.186856] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.751 [2024-11-20 13:36:03.186876] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:03.751 [2024-11-20 13:36:03.186889] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.751 [2024-11-20 13:36:03.189215] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.751 [2024-11-20 13:36:03.189375] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:03.751 BaseBdev4 00:16:03.751 13:36:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.751 13:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:16:03.751 13:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.751 13:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.751 [2024-11-20 13:36:03.198864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:03.751 [2024-11-20 13:36:03.200949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:03.751 [2024-11-20 13:36:03.201163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:03.751 [2024-11-20 13:36:03.201239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:03.752 [2024-11-20 13:36:03.201446] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:16:03.752 [2024-11-20 13:36:03.201464] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:03.752 [2024-11-20 13:36:03.201715] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:16:03.752 [2024-11-20 13:36:03.201869] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:16:03.752 [2024-11-20 13:36:03.201881] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:16:03.752 [2024-11-20 13:36:03.202037] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:03.752 13:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.752 13:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:16:03.752 13:36:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:03.752 13:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:03.752 13:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:03.752 13:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.752 13:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:03.752 13:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.752 13:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.752 13:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.752 13:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.752 13:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.752 13:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.752 13:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.752 13:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.752 13:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.010 13:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.010 "name": "raid_bdev1", 00:16:04.010 "uuid": "32c43bb7-ef86-45b7-9cc0-1aa0f3e8d280", 00:16:04.010 "strip_size_kb": 64, 00:16:04.010 "state": "online", 00:16:04.010 "raid_level": "raid0", 00:16:04.010 "superblock": true, 00:16:04.010 "num_base_bdevs": 4, 00:16:04.010 "num_base_bdevs_discovered": 4, 00:16:04.010 "num_base_bdevs_operational": 4, 00:16:04.010 "base_bdevs_list": [ 00:16:04.010 
{ 00:16:04.011 "name": "BaseBdev1", 00:16:04.011 "uuid": "2c0f5776-821d-5754-a3b4-4df65f69e24c", 00:16:04.011 "is_configured": true, 00:16:04.011 "data_offset": 2048, 00:16:04.011 "data_size": 63488 00:16:04.011 }, 00:16:04.011 { 00:16:04.011 "name": "BaseBdev2", 00:16:04.011 "uuid": "74a49303-b8f7-5ec9-8086-c6e3cce354e0", 00:16:04.011 "is_configured": true, 00:16:04.011 "data_offset": 2048, 00:16:04.011 "data_size": 63488 00:16:04.011 }, 00:16:04.011 { 00:16:04.011 "name": "BaseBdev3", 00:16:04.011 "uuid": "b5641841-b933-5e11-9e36-8a39eff8cf3c", 00:16:04.011 "is_configured": true, 00:16:04.011 "data_offset": 2048, 00:16:04.011 "data_size": 63488 00:16:04.011 }, 00:16:04.011 { 00:16:04.011 "name": "BaseBdev4", 00:16:04.011 "uuid": "d5819447-c54d-5ebd-97f6-dfc221c948ef", 00:16:04.011 "is_configured": true, 00:16:04.011 "data_offset": 2048, 00:16:04.011 "data_size": 63488 00:16:04.011 } 00:16:04.011 ] 00:16:04.011 }' 00:16:04.011 13:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.011 13:36:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.269 13:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:04.269 13:36:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:04.269 [2024-11-20 13:36:03.715421] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:16:05.206 13:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:05.206 13:36:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.206 13:36:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.206 13:36:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.206 13:36:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:05.206 13:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:16:05.206 13:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:16:05.206 13:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:16:05.206 13:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.206 13:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.206 13:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:05.206 13:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.206 13:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:05.206 13:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.206 13:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.206 13:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.206 13:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.206 13:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.206 13:36:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.206 13:36:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.206 13:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.206 13:36:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.465 13:36:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.465 "name": "raid_bdev1", 00:16:05.465 "uuid": "32c43bb7-ef86-45b7-9cc0-1aa0f3e8d280", 00:16:05.465 "strip_size_kb": 64, 00:16:05.465 "state": "online", 00:16:05.465 "raid_level": "raid0", 00:16:05.465 "superblock": true, 00:16:05.465 "num_base_bdevs": 4, 00:16:05.465 "num_base_bdevs_discovered": 4, 00:16:05.465 "num_base_bdevs_operational": 4, 00:16:05.465 "base_bdevs_list": [ 00:16:05.465 { 00:16:05.465 "name": "BaseBdev1", 00:16:05.465 "uuid": "2c0f5776-821d-5754-a3b4-4df65f69e24c", 00:16:05.465 "is_configured": true, 00:16:05.465 "data_offset": 2048, 00:16:05.465 "data_size": 63488 00:16:05.465 }, 00:16:05.465 { 00:16:05.465 "name": "BaseBdev2", 00:16:05.465 "uuid": "74a49303-b8f7-5ec9-8086-c6e3cce354e0", 00:16:05.465 "is_configured": true, 00:16:05.465 "data_offset": 2048, 00:16:05.465 "data_size": 63488 00:16:05.465 }, 00:16:05.465 { 00:16:05.465 "name": "BaseBdev3", 00:16:05.465 "uuid": "b5641841-b933-5e11-9e36-8a39eff8cf3c", 00:16:05.465 "is_configured": true, 00:16:05.465 "data_offset": 2048, 00:16:05.465 "data_size": 63488 00:16:05.465 }, 00:16:05.465 { 00:16:05.465 "name": "BaseBdev4", 00:16:05.465 "uuid": "d5819447-c54d-5ebd-97f6-dfc221c948ef", 00:16:05.465 "is_configured": true, 00:16:05.465 "data_offset": 2048, 00:16:05.465 "data_size": 63488 00:16:05.465 } 00:16:05.465 ] 00:16:05.465 }' 00:16:05.465 13:36:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.465 13:36:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.724 13:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:05.724 13:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.724 13:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.724 [2024-11-20 13:36:05.091999] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:05.724 [2024-11-20 13:36:05.092034] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:05.724 [2024-11-20 13:36:05.094718] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:05.724 [2024-11-20 13:36:05.094782] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.724 [2024-11-20 13:36:05.094827] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:05.724 [2024-11-20 13:36:05.094841] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:16:05.724 { 00:16:05.724 "results": [ 00:16:05.724 { 00:16:05.724 "job": "raid_bdev1", 00:16:05.724 "core_mask": "0x1", 00:16:05.724 "workload": "randrw", 00:16:05.724 "percentage": 50, 00:16:05.724 "status": "finished", 00:16:05.724 "queue_depth": 1, 00:16:05.724 "io_size": 131072, 00:16:05.724 "runtime": 1.376781, 00:16:05.724 "iops": 16263.298229711188, 00:16:05.724 "mibps": 2032.9122787138986, 00:16:05.724 "io_failed": 1, 00:16:05.724 "io_timeout": 0, 00:16:05.724 "avg_latency_us": 84.8495254329214, 00:16:05.724 "min_latency_us": 26.730923694779115, 00:16:05.724 "max_latency_us": 1381.7831325301204 00:16:05.724 } 00:16:05.724 ], 00:16:05.724 "core_count": 1 00:16:05.724 } 00:16:05.724 13:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.724 13:36:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70749 00:16:05.724 13:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 70749 ']' 00:16:05.724 13:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 70749 00:16:05.724 13:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:16:05.724 13:36:05 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:05.724 13:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70749 00:16:05.724 13:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:05.724 13:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:05.724 killing process with pid 70749 00:16:05.724 13:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70749' 00:16:05.724 13:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 70749 00:16:05.724 [2024-11-20 13:36:05.130940] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:05.724 13:36:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 70749 00:16:05.985 [2024-11-20 13:36:05.466603] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:07.364 13:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.tYI2Y0lPR6 00:16:07.364 13:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:07.364 13:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:07.364 13:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:16:07.364 13:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:16:07.364 13:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:07.364 13:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:07.364 13:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:16:07.364 00:16:07.364 real 0m4.752s 00:16:07.364 user 0m5.552s 00:16:07.364 sys 0m0.625s 00:16:07.364 13:36:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:16:07.364 13:36:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.364 ************************************ 00:16:07.364 END TEST raid_read_error_test 00:16:07.364 ************************************ 00:16:07.364 13:36:06 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:16:07.364 13:36:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:07.364 13:36:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:07.364 13:36:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:07.364 ************************************ 00:16:07.364 START TEST raid_write_error_test 00:16:07.364 ************************************ 00:16:07.364 13:36:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:16:07.364 13:36:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:16:07.364 13:36:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:16:07.364 13:36:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:16:07.364 13:36:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:16:07.364 13:36:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:07.364 13:36:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:16:07.364 13:36:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:07.364 13:36:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:07.364 13:36:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:16:07.364 13:36:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:07.364 13:36:06 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:07.364 13:36:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:16:07.364 13:36:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:07.364 13:36:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:07.364 13:36:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:16:07.364 13:36:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:07.364 13:36:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:07.364 13:36:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:07.364 13:36:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:16:07.364 13:36:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:16:07.364 13:36:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:16:07.364 13:36:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:16:07.364 13:36:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:16:07.364 13:36:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:16:07.364 13:36:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:16:07.364 13:36:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:16:07.364 13:36:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:16:07.364 13:36:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:16:07.364 13:36:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.nj0IKxP61N 00:16:07.364 13:36:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70889 00:16:07.364 13:36:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70889 00:16:07.364 13:36:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:07.364 13:36:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 70889 ']' 00:16:07.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:07.364 13:36:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.364 13:36:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:07.364 13:36:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.364 13:36:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:07.364 13:36:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.622 [2024-11-20 13:36:06.910188] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:16:07.622 [2024-11-20 13:36:06.910331] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70889 ] 00:16:07.622 [2024-11-20 13:36:07.094118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.881 [2024-11-20 13:36:07.219305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.139 [2024-11-20 13:36:07.436508] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:08.139 [2024-11-20 13:36:07.436575] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:08.398 13:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:08.398 13:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:16:08.398 13:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:08.398 13:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:08.398 13:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.398 13:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.399 BaseBdev1_malloc 00:16:08.399 13:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.399 13:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:16:08.399 13:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.399 13:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.399 true 00:16:08.399 13:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:08.399 13:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:08.399 13:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.399 13:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.399 [2024-11-20 13:36:07.812250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:08.399 [2024-11-20 13:36:07.812310] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.399 [2024-11-20 13:36:07.812334] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:08.399 [2024-11-20 13:36:07.812348] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.399 [2024-11-20 13:36:07.814858] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.399 [2024-11-20 13:36:07.815069] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:08.399 BaseBdev1 00:16:08.399 13:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.399 13:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:08.399 13:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:08.399 13:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.399 13:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.399 BaseBdev2_malloc 00:16:08.399 13:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.399 13:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:16:08.399 13:36:07 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.399 13:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.399 true 00:16:08.399 13:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.399 13:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:08.399 13:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.399 13:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.399 [2024-11-20 13:36:07.880971] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:08.399 [2024-11-20 13:36:07.881031] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.399 [2024-11-20 13:36:07.881052] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:08.399 [2024-11-20 13:36:07.881083] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.659 [2024-11-20 13:36:07.883664] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.659 [2024-11-20 13:36:07.883711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:08.659 BaseBdev2 00:16:08.659 13:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.659 13:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:08.659 13:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:08.659 13:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.659 13:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:16:08.659 BaseBdev3_malloc 00:16:08.659 13:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.659 13:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:16:08.659 13:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.659 13:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.659 true 00:16:08.659 13:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.659 13:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:08.659 13:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.659 13:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.659 [2024-11-20 13:36:07.961852] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:08.659 [2024-11-20 13:36:07.962027] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.659 [2024-11-20 13:36:07.962078] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:08.659 [2024-11-20 13:36:07.962096] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.659 [2024-11-20 13:36:07.964589] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.659 [2024-11-20 13:36:07.964634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:08.659 BaseBdev3 00:16:08.659 13:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.659 13:36:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:08.659 13:36:07 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:08.659 13:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.659 13:36:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.659 BaseBdev4_malloc 00:16:08.659 13:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.659 13:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:16:08.659 13:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.659 13:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.659 true 00:16:08.659 13:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.659 13:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:08.659 13:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.659 13:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.659 [2024-11-20 13:36:08.031961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:08.659 [2024-11-20 13:36:08.032020] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.659 [2024-11-20 13:36:08.032041] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:08.659 [2024-11-20 13:36:08.032066] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.659 [2024-11-20 13:36:08.034529] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.660 [2024-11-20 13:36:08.034703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:08.660 BaseBdev4 
00:16:08.660 13:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.660 13:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:16:08.660 13:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.660 13:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.660 [2024-11-20 13:36:08.044008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:08.660 [2024-11-20 13:36:08.046200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:08.660 [2024-11-20 13:36:08.046415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:08.660 [2024-11-20 13:36:08.046523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:08.660 [2024-11-20 13:36:08.046842] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:16:08.660 [2024-11-20 13:36:08.046959] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:08.660 [2024-11-20 13:36:08.047279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:16:08.660 [2024-11-20 13:36:08.047487] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:16:08.660 [2024-11-20 13:36:08.047529] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:16:08.660 [2024-11-20 13:36:08.047817] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:08.660 13:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.660 13:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:16:08.660 13:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:08.660 13:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:08.660 13:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:08.660 13:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.660 13:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:08.660 13:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.660 13:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.660 13:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.660 13:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.660 13:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.660 13:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.660 13:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.660 13:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.660 13:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.660 13:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.660 "name": "raid_bdev1", 00:16:08.660 "uuid": "0fc28278-8465-40a0-a720-d90e61308187", 00:16:08.660 "strip_size_kb": 64, 00:16:08.660 "state": "online", 00:16:08.660 "raid_level": "raid0", 00:16:08.660 "superblock": true, 00:16:08.660 "num_base_bdevs": 4, 00:16:08.660 "num_base_bdevs_discovered": 4, 00:16:08.660 
"num_base_bdevs_operational": 4, 00:16:08.660 "base_bdevs_list": [ 00:16:08.660 { 00:16:08.660 "name": "BaseBdev1", 00:16:08.660 "uuid": "b8097b30-dc64-5fe2-84a0-f5b6367516da", 00:16:08.660 "is_configured": true, 00:16:08.660 "data_offset": 2048, 00:16:08.660 "data_size": 63488 00:16:08.660 }, 00:16:08.660 { 00:16:08.660 "name": "BaseBdev2", 00:16:08.660 "uuid": "02617599-8720-55b5-be78-45a0ceeafbb9", 00:16:08.660 "is_configured": true, 00:16:08.660 "data_offset": 2048, 00:16:08.660 "data_size": 63488 00:16:08.660 }, 00:16:08.660 { 00:16:08.660 "name": "BaseBdev3", 00:16:08.660 "uuid": "b7f33b0d-fb77-53cd-a282-faeeab133539", 00:16:08.660 "is_configured": true, 00:16:08.660 "data_offset": 2048, 00:16:08.660 "data_size": 63488 00:16:08.660 }, 00:16:08.660 { 00:16:08.660 "name": "BaseBdev4", 00:16:08.660 "uuid": "2546a435-2b14-5587-9b18-a3a6c5174616", 00:16:08.660 "is_configured": true, 00:16:08.660 "data_offset": 2048, 00:16:08.660 "data_size": 63488 00:16:08.660 } 00:16:08.660 ] 00:16:08.660 }' 00:16:08.660 13:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.660 13:36:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.232 13:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:09.232 13:36:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:09.232 [2024-11-20 13:36:08.557080] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:16:10.169 13:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:10.169 13:36:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.169 13:36:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.169 13:36:09 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.169 13:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:10.169 13:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:16:10.169 13:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:16:10.169 13:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:16:10.169 13:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.169 13:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.169 13:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:10.169 13:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.169 13:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:10.169 13:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.169 13:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.169 13:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.169 13:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.169 13:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.169 13:36:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.169 13:36:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.169 13:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.169 13:36:09 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.169 13:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.169 "name": "raid_bdev1", 00:16:10.169 "uuid": "0fc28278-8465-40a0-a720-d90e61308187", 00:16:10.169 "strip_size_kb": 64, 00:16:10.169 "state": "online", 00:16:10.169 "raid_level": "raid0", 00:16:10.169 "superblock": true, 00:16:10.169 "num_base_bdevs": 4, 00:16:10.169 "num_base_bdevs_discovered": 4, 00:16:10.169 "num_base_bdevs_operational": 4, 00:16:10.169 "base_bdevs_list": [ 00:16:10.169 { 00:16:10.169 "name": "BaseBdev1", 00:16:10.169 "uuid": "b8097b30-dc64-5fe2-84a0-f5b6367516da", 00:16:10.169 "is_configured": true, 00:16:10.169 "data_offset": 2048, 00:16:10.169 "data_size": 63488 00:16:10.169 }, 00:16:10.169 { 00:16:10.169 "name": "BaseBdev2", 00:16:10.169 "uuid": "02617599-8720-55b5-be78-45a0ceeafbb9", 00:16:10.169 "is_configured": true, 00:16:10.169 "data_offset": 2048, 00:16:10.169 "data_size": 63488 00:16:10.169 }, 00:16:10.169 { 00:16:10.169 "name": "BaseBdev3", 00:16:10.169 "uuid": "b7f33b0d-fb77-53cd-a282-faeeab133539", 00:16:10.169 "is_configured": true, 00:16:10.169 "data_offset": 2048, 00:16:10.169 "data_size": 63488 00:16:10.169 }, 00:16:10.169 { 00:16:10.169 "name": "BaseBdev4", 00:16:10.169 "uuid": "2546a435-2b14-5587-9b18-a3a6c5174616", 00:16:10.169 "is_configured": true, 00:16:10.169 "data_offset": 2048, 00:16:10.169 "data_size": 63488 00:16:10.169 } 00:16:10.169 ] 00:16:10.169 }' 00:16:10.169 13:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.169 13:36:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.428 13:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:10.428 13:36:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.428 13:36:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:16:10.428 [2024-11-20 13:36:09.904190] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:10.428 [2024-11-20 13:36:09.904230] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:10.428 [2024-11-20 13:36:09.906923] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:10.428 [2024-11-20 13:36:09.906989] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.428 [2024-11-20 13:36:09.907034] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:10.428 [2024-11-20 13:36:09.907048] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:16:10.428 { 00:16:10.428 "results": [ 00:16:10.428 { 00:16:10.428 "job": "raid_bdev1", 00:16:10.428 "core_mask": "0x1", 00:16:10.428 "workload": "randrw", 00:16:10.428 "percentage": 50, 00:16:10.428 "status": "finished", 00:16:10.428 "queue_depth": 1, 00:16:10.428 "io_size": 131072, 00:16:10.428 "runtime": 1.346625, 00:16:10.428 "iops": 15896.036387264458, 00:16:10.428 "mibps": 1987.0045484080572, 00:16:10.428 "io_failed": 1, 00:16:10.428 "io_timeout": 0, 00:16:10.428 "avg_latency_us": 86.83516899381522, 00:16:10.428 "min_latency_us": 27.142168674698794, 00:16:10.428 "max_latency_us": 1408.1028112449799 00:16:10.428 } 00:16:10.428 ], 00:16:10.428 "core_count": 1 00:16:10.428 } 00:16:10.428 13:36:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.428 13:36:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70889 00:16:10.429 13:36:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 70889 ']' 00:16:10.429 13:36:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 70889 00:16:10.429 13:36:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # 
uname 00:16:10.688 13:36:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:10.688 13:36:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70889 00:16:10.688 13:36:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:10.688 13:36:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:10.688 killing process with pid 70889 00:16:10.688 13:36:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70889' 00:16:10.688 13:36:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 70889 00:16:10.688 [2024-11-20 13:36:09.958718] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:10.688 13:36:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 70889 00:16:10.948 [2024-11-20 13:36:10.285662] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:12.394 13:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.nj0IKxP61N 00:16:12.394 13:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:12.394 13:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:12.394 13:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:16:12.394 13:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:16:12.394 13:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:12.394 13:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:12.394 13:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:16:12.394 00:16:12.394 real 0m4.757s 00:16:12.394 user 0m5.528s 00:16:12.394 sys 0m0.664s 00:16:12.394 
13:36:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:12.394 13:36:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.394 ************************************ 00:16:12.394 END TEST raid_write_error_test 00:16:12.394 ************************************ 00:16:12.394 13:36:11 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:16:12.394 13:36:11 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:16:12.394 13:36:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:12.394 13:36:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:12.394 13:36:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:12.394 ************************************ 00:16:12.394 START TEST raid_state_function_test 00:16:12.394 ************************************ 00:16:12.394 13:36:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:16:12.394 13:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:16:12.394 13:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:12.394 13:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:12.394 13:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:12.394 13:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:12.394 13:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:12.394 13:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:12.394 13:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:12.394 13:36:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:12.394 13:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:12.394 13:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:12.394 13:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:12.394 13:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:12.394 13:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:12.394 13:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:12.394 13:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:12.394 13:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:12.394 13:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:12.394 13:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:12.394 13:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:12.394 13:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:12.394 13:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:12.394 13:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:12.394 13:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:12.394 13:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:16:12.394 13:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:12.394 13:36:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:12.394 13:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:12.394 13:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:12.394 13:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71037 00:16:12.394 13:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:12.394 13:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71037' 00:16:12.394 Process raid pid: 71037 00:16:12.394 13:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71037 00:16:12.394 13:36:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71037 ']' 00:16:12.395 13:36:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.395 13:36:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:12.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:12.395 13:36:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.395 13:36:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:12.395 13:36:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.395 [2024-11-20 13:36:11.725831] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:16:12.395 [2024-11-20 13:36:11.725956] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:12.653 [2024-11-20 13:36:11.909095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.653 [2024-11-20 13:36:12.030344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.912 [2024-11-20 13:36:12.249517] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:12.912 [2024-11-20 13:36:12.249607] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:13.335 13:36:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:13.335 13:36:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:16:13.335 13:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:13.335 13:36:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.335 13:36:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.335 [2024-11-20 13:36:12.567238] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:13.335 [2024-11-20 13:36:12.567298] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:13.335 [2024-11-20 13:36:12.567309] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:13.335 [2024-11-20 13:36:12.567322] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:13.335 [2024-11-20 13:36:12.567330] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:16:13.336 [2024-11-20 13:36:12.567342] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:13.336 [2024-11-20 13:36:12.567349] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:13.336 [2024-11-20 13:36:12.567361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:13.336 13:36:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.336 13:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:13.336 13:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:13.336 13:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:13.336 13:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:13.336 13:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:13.336 13:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:13.336 13:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.336 13:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.336 13:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.336 13:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.336 13:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.336 13:36:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.336 13:36:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:16:13.336 13:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.336 13:36:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.336 13:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.336 "name": "Existed_Raid", 00:16:13.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.336 "strip_size_kb": 64, 00:16:13.336 "state": "configuring", 00:16:13.336 "raid_level": "concat", 00:16:13.336 "superblock": false, 00:16:13.336 "num_base_bdevs": 4, 00:16:13.336 "num_base_bdevs_discovered": 0, 00:16:13.336 "num_base_bdevs_operational": 4, 00:16:13.336 "base_bdevs_list": [ 00:16:13.336 { 00:16:13.336 "name": "BaseBdev1", 00:16:13.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.336 "is_configured": false, 00:16:13.336 "data_offset": 0, 00:16:13.336 "data_size": 0 00:16:13.336 }, 00:16:13.336 { 00:16:13.336 "name": "BaseBdev2", 00:16:13.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.336 "is_configured": false, 00:16:13.336 "data_offset": 0, 00:16:13.336 "data_size": 0 00:16:13.336 }, 00:16:13.336 { 00:16:13.336 "name": "BaseBdev3", 00:16:13.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.336 "is_configured": false, 00:16:13.336 "data_offset": 0, 00:16:13.336 "data_size": 0 00:16:13.336 }, 00:16:13.336 { 00:16:13.336 "name": "BaseBdev4", 00:16:13.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.336 "is_configured": false, 00:16:13.336 "data_offset": 0, 00:16:13.336 "data_size": 0 00:16:13.336 } 00:16:13.336 ] 00:16:13.336 }' 00:16:13.336 13:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.336 13:36:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.595 13:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:16:13.595 13:36:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.595 13:36:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.595 [2024-11-20 13:36:12.938771] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:13.595 [2024-11-20 13:36:12.938820] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:13.595 13:36:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.595 13:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:13.595 13:36:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.595 13:36:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.595 [2024-11-20 13:36:12.950732] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:13.595 [2024-11-20 13:36:12.950786] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:13.595 [2024-11-20 13:36:12.950797] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:13.595 [2024-11-20 13:36:12.950810] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:13.595 [2024-11-20 13:36:12.950819] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:13.595 [2024-11-20 13:36:12.950831] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:13.595 [2024-11-20 13:36:12.950839] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:13.595 [2024-11-20 13:36:12.950852] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:13.595 13:36:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.595 13:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:13.595 13:36:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.595 13:36:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.595 [2024-11-20 13:36:12.999127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:13.595 BaseBdev1 00:16:13.595 13:36:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.595 13:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:13.595 13:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:13.595 13:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:13.595 13:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:13.595 13:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:13.595 13:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:13.595 13:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:13.595 13:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.595 13:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.595 13:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.595 13:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:13.595 13:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.595 13:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.595 [ 00:16:13.595 { 00:16:13.595 "name": "BaseBdev1", 00:16:13.595 "aliases": [ 00:16:13.595 "429f9017-a198-4bef-a40e-29428f41e72b" 00:16:13.595 ], 00:16:13.595 "product_name": "Malloc disk", 00:16:13.595 "block_size": 512, 00:16:13.595 "num_blocks": 65536, 00:16:13.595 "uuid": "429f9017-a198-4bef-a40e-29428f41e72b", 00:16:13.595 "assigned_rate_limits": { 00:16:13.595 "rw_ios_per_sec": 0, 00:16:13.595 "rw_mbytes_per_sec": 0, 00:16:13.595 "r_mbytes_per_sec": 0, 00:16:13.595 "w_mbytes_per_sec": 0 00:16:13.595 }, 00:16:13.595 "claimed": true, 00:16:13.595 "claim_type": "exclusive_write", 00:16:13.595 "zoned": false, 00:16:13.595 "supported_io_types": { 00:16:13.595 "read": true, 00:16:13.595 "write": true, 00:16:13.595 "unmap": true, 00:16:13.595 "flush": true, 00:16:13.595 "reset": true, 00:16:13.595 "nvme_admin": false, 00:16:13.595 "nvme_io": false, 00:16:13.595 "nvme_io_md": false, 00:16:13.595 "write_zeroes": true, 00:16:13.595 "zcopy": true, 00:16:13.595 "get_zone_info": false, 00:16:13.595 "zone_management": false, 00:16:13.595 "zone_append": false, 00:16:13.595 "compare": false, 00:16:13.595 "compare_and_write": false, 00:16:13.595 "abort": true, 00:16:13.595 "seek_hole": false, 00:16:13.595 "seek_data": false, 00:16:13.595 "copy": true, 00:16:13.595 "nvme_iov_md": false 00:16:13.595 }, 00:16:13.595 "memory_domains": [ 00:16:13.595 { 00:16:13.595 "dma_device_id": "system", 00:16:13.595 "dma_device_type": 1 00:16:13.595 }, 00:16:13.595 { 00:16:13.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.595 "dma_device_type": 2 00:16:13.595 } 00:16:13.595 ], 00:16:13.595 "driver_specific": {} 00:16:13.595 } 00:16:13.595 ] 00:16:13.595 13:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:13.595 13:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:13.595 13:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:13.595 13:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:13.595 13:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:13.595 13:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:13.595 13:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:13.595 13:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:13.595 13:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.595 13:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.596 13:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.596 13:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.596 13:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.596 13:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.596 13:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.596 13:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.596 13:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.854 13:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.854 "name": "Existed_Raid", 
00:16:13.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.854 "strip_size_kb": 64, 00:16:13.854 "state": "configuring", 00:16:13.854 "raid_level": "concat", 00:16:13.854 "superblock": false, 00:16:13.854 "num_base_bdevs": 4, 00:16:13.854 "num_base_bdevs_discovered": 1, 00:16:13.855 "num_base_bdevs_operational": 4, 00:16:13.855 "base_bdevs_list": [ 00:16:13.855 { 00:16:13.855 "name": "BaseBdev1", 00:16:13.855 "uuid": "429f9017-a198-4bef-a40e-29428f41e72b", 00:16:13.855 "is_configured": true, 00:16:13.855 "data_offset": 0, 00:16:13.855 "data_size": 65536 00:16:13.855 }, 00:16:13.855 { 00:16:13.855 "name": "BaseBdev2", 00:16:13.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.855 "is_configured": false, 00:16:13.855 "data_offset": 0, 00:16:13.855 "data_size": 0 00:16:13.855 }, 00:16:13.855 { 00:16:13.855 "name": "BaseBdev3", 00:16:13.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.855 "is_configured": false, 00:16:13.855 "data_offset": 0, 00:16:13.855 "data_size": 0 00:16:13.855 }, 00:16:13.855 { 00:16:13.855 "name": "BaseBdev4", 00:16:13.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.855 "is_configured": false, 00:16:13.855 "data_offset": 0, 00:16:13.855 "data_size": 0 00:16:13.855 } 00:16:13.855 ] 00:16:13.855 }' 00:16:13.855 13:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.855 13:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.113 13:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:14.113 13:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.113 13:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.113 [2024-11-20 13:36:13.518459] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:14.114 [2024-11-20 13:36:13.518521] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:14.114 13:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.114 13:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:14.114 13:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.114 13:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.114 [2024-11-20 13:36:13.526525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:14.114 [2024-11-20 13:36:13.528841] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:14.114 [2024-11-20 13:36:13.528906] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:14.114 [2024-11-20 13:36:13.528919] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:14.114 [2024-11-20 13:36:13.528934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:14.114 [2024-11-20 13:36:13.528942] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:14.114 [2024-11-20 13:36:13.528953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:14.114 13:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.114 13:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:14.114 13:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:14.114 13:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:16:14.114 13:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:14.114 13:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:14.114 13:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:14.114 13:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:14.114 13:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:14.114 13:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.114 13:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.114 13:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.114 13:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.114 13:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.114 13:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.114 13:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.114 13:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.114 13:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.114 13:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.114 "name": "Existed_Raid", 00:16:14.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.114 "strip_size_kb": 64, 00:16:14.114 "state": "configuring", 00:16:14.114 "raid_level": "concat", 00:16:14.114 "superblock": false, 00:16:14.114 "num_base_bdevs": 4, 00:16:14.114 
"num_base_bdevs_discovered": 1, 00:16:14.114 "num_base_bdevs_operational": 4, 00:16:14.114 "base_bdevs_list": [ 00:16:14.114 { 00:16:14.114 "name": "BaseBdev1", 00:16:14.114 "uuid": "429f9017-a198-4bef-a40e-29428f41e72b", 00:16:14.114 "is_configured": true, 00:16:14.114 "data_offset": 0, 00:16:14.114 "data_size": 65536 00:16:14.114 }, 00:16:14.114 { 00:16:14.114 "name": "BaseBdev2", 00:16:14.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.114 "is_configured": false, 00:16:14.114 "data_offset": 0, 00:16:14.114 "data_size": 0 00:16:14.114 }, 00:16:14.114 { 00:16:14.114 "name": "BaseBdev3", 00:16:14.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.114 "is_configured": false, 00:16:14.114 "data_offset": 0, 00:16:14.114 "data_size": 0 00:16:14.114 }, 00:16:14.114 { 00:16:14.114 "name": "BaseBdev4", 00:16:14.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.114 "is_configured": false, 00:16:14.114 "data_offset": 0, 00:16:14.114 "data_size": 0 00:16:14.114 } 00:16:14.114 ] 00:16:14.114 }' 00:16:14.114 13:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.114 13:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.682 13:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:14.682 13:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.682 13:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.682 [2024-11-20 13:36:14.018334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:14.682 BaseBdev2 00:16:14.682 13:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.682 13:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:14.682 13:36:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:14.682 13:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:14.682 13:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:14.682 13:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:14.682 13:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:14.682 13:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:14.682 13:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.682 13:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.682 13:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.682 13:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:14.682 13:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.682 13:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.682 [ 00:16:14.682 { 00:16:14.682 "name": "BaseBdev2", 00:16:14.682 "aliases": [ 00:16:14.682 "d42765e3-9227-4eeb-a6d4-a2da140d3cf7" 00:16:14.682 ], 00:16:14.682 "product_name": "Malloc disk", 00:16:14.682 "block_size": 512, 00:16:14.682 "num_blocks": 65536, 00:16:14.682 "uuid": "d42765e3-9227-4eeb-a6d4-a2da140d3cf7", 00:16:14.682 "assigned_rate_limits": { 00:16:14.682 "rw_ios_per_sec": 0, 00:16:14.682 "rw_mbytes_per_sec": 0, 00:16:14.682 "r_mbytes_per_sec": 0, 00:16:14.682 "w_mbytes_per_sec": 0 00:16:14.682 }, 00:16:14.682 "claimed": true, 00:16:14.682 "claim_type": "exclusive_write", 00:16:14.682 "zoned": false, 00:16:14.682 "supported_io_types": { 
00:16:14.682 "read": true, 00:16:14.682 "write": true, 00:16:14.682 "unmap": true, 00:16:14.682 "flush": true, 00:16:14.682 "reset": true, 00:16:14.682 "nvme_admin": false, 00:16:14.682 "nvme_io": false, 00:16:14.682 "nvme_io_md": false, 00:16:14.682 "write_zeroes": true, 00:16:14.682 "zcopy": true, 00:16:14.682 "get_zone_info": false, 00:16:14.682 "zone_management": false, 00:16:14.682 "zone_append": false, 00:16:14.682 "compare": false, 00:16:14.682 "compare_and_write": false, 00:16:14.682 "abort": true, 00:16:14.682 "seek_hole": false, 00:16:14.682 "seek_data": false, 00:16:14.682 "copy": true, 00:16:14.682 "nvme_iov_md": false 00:16:14.682 }, 00:16:14.682 "memory_domains": [ 00:16:14.682 { 00:16:14.682 "dma_device_id": "system", 00:16:14.682 "dma_device_type": 1 00:16:14.682 }, 00:16:14.682 { 00:16:14.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.682 "dma_device_type": 2 00:16:14.682 } 00:16:14.682 ], 00:16:14.682 "driver_specific": {} 00:16:14.682 } 00:16:14.682 ] 00:16:14.682 13:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.682 13:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:14.682 13:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:14.682 13:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:14.682 13:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:14.682 13:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:14.682 13:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:14.682 13:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:14.682 13:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:16:14.682 13:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:14.682 13:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.682 13:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.682 13:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.682 13:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.682 13:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.682 13:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.682 13:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.682 13:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.682 13:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.682 13:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.682 "name": "Existed_Raid", 00:16:14.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.682 "strip_size_kb": 64, 00:16:14.682 "state": "configuring", 00:16:14.682 "raid_level": "concat", 00:16:14.682 "superblock": false, 00:16:14.682 "num_base_bdevs": 4, 00:16:14.682 "num_base_bdevs_discovered": 2, 00:16:14.682 "num_base_bdevs_operational": 4, 00:16:14.682 "base_bdevs_list": [ 00:16:14.682 { 00:16:14.682 "name": "BaseBdev1", 00:16:14.682 "uuid": "429f9017-a198-4bef-a40e-29428f41e72b", 00:16:14.682 "is_configured": true, 00:16:14.682 "data_offset": 0, 00:16:14.682 "data_size": 65536 00:16:14.682 }, 00:16:14.682 { 00:16:14.682 "name": "BaseBdev2", 00:16:14.682 "uuid": "d42765e3-9227-4eeb-a6d4-a2da140d3cf7", 00:16:14.682 
"is_configured": true, 00:16:14.682 "data_offset": 0, 00:16:14.682 "data_size": 65536 00:16:14.682 }, 00:16:14.682 { 00:16:14.682 "name": "BaseBdev3", 00:16:14.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.682 "is_configured": false, 00:16:14.682 "data_offset": 0, 00:16:14.682 "data_size": 0 00:16:14.682 }, 00:16:14.682 { 00:16:14.682 "name": "BaseBdev4", 00:16:14.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.682 "is_configured": false, 00:16:14.682 "data_offset": 0, 00:16:14.682 "data_size": 0 00:16:14.682 } 00:16:14.682 ] 00:16:14.682 }' 00:16:14.682 13:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.682 13:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.249 13:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:15.249 13:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.249 13:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.249 [2024-11-20 13:36:14.537578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:15.249 BaseBdev3 00:16:15.249 13:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.249 13:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:15.249 13:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:15.249 13:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:15.249 13:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:15.249 13:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:15.249 13:36:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:15.249 13:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:15.249 13:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.249 13:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.249 13:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.249 13:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:15.249 13:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.249 13:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.249 [ 00:16:15.249 { 00:16:15.249 "name": "BaseBdev3", 00:16:15.249 "aliases": [ 00:16:15.249 "a3c102f9-e06a-48e8-b7f7-04da20a22ae5" 00:16:15.249 ], 00:16:15.249 "product_name": "Malloc disk", 00:16:15.249 "block_size": 512, 00:16:15.249 "num_blocks": 65536, 00:16:15.249 "uuid": "a3c102f9-e06a-48e8-b7f7-04da20a22ae5", 00:16:15.249 "assigned_rate_limits": { 00:16:15.249 "rw_ios_per_sec": 0, 00:16:15.249 "rw_mbytes_per_sec": 0, 00:16:15.249 "r_mbytes_per_sec": 0, 00:16:15.249 "w_mbytes_per_sec": 0 00:16:15.249 }, 00:16:15.249 "claimed": true, 00:16:15.249 "claim_type": "exclusive_write", 00:16:15.249 "zoned": false, 00:16:15.249 "supported_io_types": { 00:16:15.249 "read": true, 00:16:15.249 "write": true, 00:16:15.249 "unmap": true, 00:16:15.250 "flush": true, 00:16:15.250 "reset": true, 00:16:15.250 "nvme_admin": false, 00:16:15.250 "nvme_io": false, 00:16:15.250 "nvme_io_md": false, 00:16:15.250 "write_zeroes": true, 00:16:15.250 "zcopy": true, 00:16:15.250 "get_zone_info": false, 00:16:15.250 "zone_management": false, 00:16:15.250 "zone_append": false, 00:16:15.250 "compare": false, 00:16:15.250 "compare_and_write": false, 
00:16:15.250 "abort": true, 00:16:15.250 "seek_hole": false, 00:16:15.250 "seek_data": false, 00:16:15.250 "copy": true, 00:16:15.250 "nvme_iov_md": false 00:16:15.250 }, 00:16:15.250 "memory_domains": [ 00:16:15.250 { 00:16:15.250 "dma_device_id": "system", 00:16:15.250 "dma_device_type": 1 00:16:15.250 }, 00:16:15.250 { 00:16:15.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.250 "dma_device_type": 2 00:16:15.250 } 00:16:15.250 ], 00:16:15.250 "driver_specific": {} 00:16:15.250 } 00:16:15.250 ] 00:16:15.250 13:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.250 13:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:15.250 13:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:15.250 13:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:15.250 13:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:15.250 13:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:15.250 13:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:15.250 13:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:15.250 13:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.250 13:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:15.250 13:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.250 13:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.250 13:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:15.250 13:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.250 13:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.250 13:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:15.250 13:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.250 13:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.250 13:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.250 13:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.250 "name": "Existed_Raid", 00:16:15.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.250 "strip_size_kb": 64, 00:16:15.250 "state": "configuring", 00:16:15.250 "raid_level": "concat", 00:16:15.250 "superblock": false, 00:16:15.250 "num_base_bdevs": 4, 00:16:15.250 "num_base_bdevs_discovered": 3, 00:16:15.250 "num_base_bdevs_operational": 4, 00:16:15.250 "base_bdevs_list": [ 00:16:15.250 { 00:16:15.250 "name": "BaseBdev1", 00:16:15.250 "uuid": "429f9017-a198-4bef-a40e-29428f41e72b", 00:16:15.250 "is_configured": true, 00:16:15.250 "data_offset": 0, 00:16:15.250 "data_size": 65536 00:16:15.250 }, 00:16:15.250 { 00:16:15.250 "name": "BaseBdev2", 00:16:15.250 "uuid": "d42765e3-9227-4eeb-a6d4-a2da140d3cf7", 00:16:15.250 "is_configured": true, 00:16:15.250 "data_offset": 0, 00:16:15.250 "data_size": 65536 00:16:15.250 }, 00:16:15.250 { 00:16:15.250 "name": "BaseBdev3", 00:16:15.250 "uuid": "a3c102f9-e06a-48e8-b7f7-04da20a22ae5", 00:16:15.250 "is_configured": true, 00:16:15.250 "data_offset": 0, 00:16:15.250 "data_size": 65536 00:16:15.250 }, 00:16:15.250 { 00:16:15.250 "name": "BaseBdev4", 00:16:15.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.250 "is_configured": false, 
00:16:15.250 "data_offset": 0, 00:16:15.250 "data_size": 0 00:16:15.250 } 00:16:15.250 ] 00:16:15.250 }' 00:16:15.250 13:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.250 13:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.508 13:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:15.508 13:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.508 13:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.766 [2024-11-20 13:36:15.001503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:15.766 [2024-11-20 13:36:15.001571] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:15.766 [2024-11-20 13:36:15.001581] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:16:15.766 [2024-11-20 13:36:15.001881] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:15.766 [2024-11-20 13:36:15.002077] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:15.766 [2024-11-20 13:36:15.002101] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:15.766 [2024-11-20 13:36:15.002388] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:15.766 BaseBdev4 00:16:15.766 13:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.766 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:15.766 13:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:15.766 13:36:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:15.766 13:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:15.766 13:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:15.766 13:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:15.766 13:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:15.766 13:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.766 13:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.766 13:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.766 13:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:15.766 13:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.766 13:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.766 [ 00:16:15.766 { 00:16:15.766 "name": "BaseBdev4", 00:16:15.766 "aliases": [ 00:16:15.766 "bf39406a-f480-44c5-88f5-f28e37446588" 00:16:15.766 ], 00:16:15.766 "product_name": "Malloc disk", 00:16:15.766 "block_size": 512, 00:16:15.766 "num_blocks": 65536, 00:16:15.766 "uuid": "bf39406a-f480-44c5-88f5-f28e37446588", 00:16:15.766 "assigned_rate_limits": { 00:16:15.766 "rw_ios_per_sec": 0, 00:16:15.766 "rw_mbytes_per_sec": 0, 00:16:15.766 "r_mbytes_per_sec": 0, 00:16:15.766 "w_mbytes_per_sec": 0 00:16:15.766 }, 00:16:15.766 "claimed": true, 00:16:15.766 "claim_type": "exclusive_write", 00:16:15.766 "zoned": false, 00:16:15.766 "supported_io_types": { 00:16:15.766 "read": true, 00:16:15.766 "write": true, 00:16:15.766 "unmap": true, 00:16:15.766 "flush": true, 00:16:15.766 "reset": true, 00:16:15.766 
"nvme_admin": false, 00:16:15.766 "nvme_io": false, 00:16:15.766 "nvme_io_md": false, 00:16:15.766 "write_zeroes": true, 00:16:15.766 "zcopy": true, 00:16:15.766 "get_zone_info": false, 00:16:15.766 "zone_management": false, 00:16:15.766 "zone_append": false, 00:16:15.766 "compare": false, 00:16:15.766 "compare_and_write": false, 00:16:15.766 "abort": true, 00:16:15.766 "seek_hole": false, 00:16:15.766 "seek_data": false, 00:16:15.766 "copy": true, 00:16:15.766 "nvme_iov_md": false 00:16:15.766 }, 00:16:15.766 "memory_domains": [ 00:16:15.766 { 00:16:15.766 "dma_device_id": "system", 00:16:15.766 "dma_device_type": 1 00:16:15.766 }, 00:16:15.766 { 00:16:15.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.766 "dma_device_type": 2 00:16:15.766 } 00:16:15.766 ], 00:16:15.766 "driver_specific": {} 00:16:15.766 } 00:16:15.767 ] 00:16:15.767 13:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.767 13:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:15.767 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:15.767 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:15.767 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:16:15.767 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:15.767 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:15.767 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:15.767 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.767 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:15.767 
13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.767 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.767 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.767 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.767 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.767 13:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.767 13:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.767 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:15.767 13:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.767 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.767 "name": "Existed_Raid", 00:16:15.767 "uuid": "1a172448-5553-40f0-bd4a-60504ea0db3f", 00:16:15.767 "strip_size_kb": 64, 00:16:15.767 "state": "online", 00:16:15.767 "raid_level": "concat", 00:16:15.767 "superblock": false, 00:16:15.767 "num_base_bdevs": 4, 00:16:15.767 "num_base_bdevs_discovered": 4, 00:16:15.767 "num_base_bdevs_operational": 4, 00:16:15.767 "base_bdevs_list": [ 00:16:15.767 { 00:16:15.767 "name": "BaseBdev1", 00:16:15.767 "uuid": "429f9017-a198-4bef-a40e-29428f41e72b", 00:16:15.767 "is_configured": true, 00:16:15.767 "data_offset": 0, 00:16:15.767 "data_size": 65536 00:16:15.767 }, 00:16:15.767 { 00:16:15.767 "name": "BaseBdev2", 00:16:15.767 "uuid": "d42765e3-9227-4eeb-a6d4-a2da140d3cf7", 00:16:15.767 "is_configured": true, 00:16:15.767 "data_offset": 0, 00:16:15.767 "data_size": 65536 00:16:15.767 }, 00:16:15.767 { 00:16:15.767 "name": "BaseBdev3", 
00:16:15.767 "uuid": "a3c102f9-e06a-48e8-b7f7-04da20a22ae5", 00:16:15.767 "is_configured": true, 00:16:15.767 "data_offset": 0, 00:16:15.767 "data_size": 65536 00:16:15.767 }, 00:16:15.767 { 00:16:15.767 "name": "BaseBdev4", 00:16:15.767 "uuid": "bf39406a-f480-44c5-88f5-f28e37446588", 00:16:15.767 "is_configured": true, 00:16:15.767 "data_offset": 0, 00:16:15.767 "data_size": 65536 00:16:15.767 } 00:16:15.767 ] 00:16:15.767 }' 00:16:15.767 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.767 13:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.026 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:16.026 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:16.026 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:16.026 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:16.026 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:16.026 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:16.026 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:16.026 13:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.026 13:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.026 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:16.026 [2024-11-20 13:36:15.445329] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:16.026 13:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.026 
13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:16.026 "name": "Existed_Raid", 00:16:16.026 "aliases": [ 00:16:16.026 "1a172448-5553-40f0-bd4a-60504ea0db3f" 00:16:16.026 ], 00:16:16.026 "product_name": "Raid Volume", 00:16:16.026 "block_size": 512, 00:16:16.026 "num_blocks": 262144, 00:16:16.026 "uuid": "1a172448-5553-40f0-bd4a-60504ea0db3f", 00:16:16.026 "assigned_rate_limits": { 00:16:16.026 "rw_ios_per_sec": 0, 00:16:16.026 "rw_mbytes_per_sec": 0, 00:16:16.026 "r_mbytes_per_sec": 0, 00:16:16.026 "w_mbytes_per_sec": 0 00:16:16.026 }, 00:16:16.026 "claimed": false, 00:16:16.026 "zoned": false, 00:16:16.026 "supported_io_types": { 00:16:16.026 "read": true, 00:16:16.026 "write": true, 00:16:16.026 "unmap": true, 00:16:16.026 "flush": true, 00:16:16.026 "reset": true, 00:16:16.026 "nvme_admin": false, 00:16:16.026 "nvme_io": false, 00:16:16.026 "nvme_io_md": false, 00:16:16.026 "write_zeroes": true, 00:16:16.026 "zcopy": false, 00:16:16.026 "get_zone_info": false, 00:16:16.026 "zone_management": false, 00:16:16.026 "zone_append": false, 00:16:16.026 "compare": false, 00:16:16.026 "compare_and_write": false, 00:16:16.026 "abort": false, 00:16:16.026 "seek_hole": false, 00:16:16.026 "seek_data": false, 00:16:16.026 "copy": false, 00:16:16.026 "nvme_iov_md": false 00:16:16.026 }, 00:16:16.026 "memory_domains": [ 00:16:16.026 { 00:16:16.026 "dma_device_id": "system", 00:16:16.026 "dma_device_type": 1 00:16:16.026 }, 00:16:16.026 { 00:16:16.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.026 "dma_device_type": 2 00:16:16.026 }, 00:16:16.026 { 00:16:16.026 "dma_device_id": "system", 00:16:16.026 "dma_device_type": 1 00:16:16.026 }, 00:16:16.026 { 00:16:16.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.026 "dma_device_type": 2 00:16:16.026 }, 00:16:16.026 { 00:16:16.026 "dma_device_id": "system", 00:16:16.026 "dma_device_type": 1 00:16:16.026 }, 00:16:16.026 { 00:16:16.026 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:16.026 "dma_device_type": 2 00:16:16.026 }, 00:16:16.026 { 00:16:16.026 "dma_device_id": "system", 00:16:16.026 "dma_device_type": 1 00:16:16.026 }, 00:16:16.026 { 00:16:16.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.026 "dma_device_type": 2 00:16:16.026 } 00:16:16.026 ], 00:16:16.026 "driver_specific": { 00:16:16.026 "raid": { 00:16:16.026 "uuid": "1a172448-5553-40f0-bd4a-60504ea0db3f", 00:16:16.026 "strip_size_kb": 64, 00:16:16.026 "state": "online", 00:16:16.026 "raid_level": "concat", 00:16:16.026 "superblock": false, 00:16:16.026 "num_base_bdevs": 4, 00:16:16.026 "num_base_bdevs_discovered": 4, 00:16:16.026 "num_base_bdevs_operational": 4, 00:16:16.026 "base_bdevs_list": [ 00:16:16.026 { 00:16:16.026 "name": "BaseBdev1", 00:16:16.026 "uuid": "429f9017-a198-4bef-a40e-29428f41e72b", 00:16:16.026 "is_configured": true, 00:16:16.026 "data_offset": 0, 00:16:16.026 "data_size": 65536 00:16:16.026 }, 00:16:16.026 { 00:16:16.026 "name": "BaseBdev2", 00:16:16.026 "uuid": "d42765e3-9227-4eeb-a6d4-a2da140d3cf7", 00:16:16.026 "is_configured": true, 00:16:16.026 "data_offset": 0, 00:16:16.026 "data_size": 65536 00:16:16.026 }, 00:16:16.026 { 00:16:16.026 "name": "BaseBdev3", 00:16:16.026 "uuid": "a3c102f9-e06a-48e8-b7f7-04da20a22ae5", 00:16:16.026 "is_configured": true, 00:16:16.026 "data_offset": 0, 00:16:16.026 "data_size": 65536 00:16:16.026 }, 00:16:16.026 { 00:16:16.026 "name": "BaseBdev4", 00:16:16.026 "uuid": "bf39406a-f480-44c5-88f5-f28e37446588", 00:16:16.026 "is_configured": true, 00:16:16.026 "data_offset": 0, 00:16:16.026 "data_size": 65536 00:16:16.026 } 00:16:16.026 ] 00:16:16.026 } 00:16:16.026 } 00:16:16.026 }' 00:16:16.026 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:16.285 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:16.285 BaseBdev2 
00:16:16.285 BaseBdev3 00:16:16.285 BaseBdev4' 00:16:16.285 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.285 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:16.285 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:16.285 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.285 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:16.285 13:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.285 13:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.285 13:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.285 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:16.285 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:16.285 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:16.285 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:16.285 13:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.285 13:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.285 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.285 13:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.285 13:36:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:16.285 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:16.285 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:16.285 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.285 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:16.285 13:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.285 13:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.285 13:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.285 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:16.285 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:16.285 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:16.285 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:16.285 13:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.285 13:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.285 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.285 13:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.285 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:16.285 13:36:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:16.285 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:16.285 13:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.285 13:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.285 [2024-11-20 13:36:15.752611] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:16.285 [2024-11-20 13:36:15.752650] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:16.285 [2024-11-20 13:36:15.752703] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:16.543 13:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.543 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:16.543 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:16:16.543 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:16.543 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:16.543 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:16:16.543 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:16:16.543 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:16.543 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:16:16.543 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:16.543 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:16:16.543 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:16.543 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.543 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.543 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.543 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.543 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.543 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:16.543 13:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.543 13:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.543 13:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.543 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.543 "name": "Existed_Raid", 00:16:16.543 "uuid": "1a172448-5553-40f0-bd4a-60504ea0db3f", 00:16:16.543 "strip_size_kb": 64, 00:16:16.543 "state": "offline", 00:16:16.543 "raid_level": "concat", 00:16:16.543 "superblock": false, 00:16:16.543 "num_base_bdevs": 4, 00:16:16.543 "num_base_bdevs_discovered": 3, 00:16:16.543 "num_base_bdevs_operational": 3, 00:16:16.543 "base_bdevs_list": [ 00:16:16.543 { 00:16:16.543 "name": null, 00:16:16.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.543 "is_configured": false, 00:16:16.543 "data_offset": 0, 00:16:16.543 "data_size": 65536 00:16:16.543 }, 00:16:16.543 { 00:16:16.543 "name": "BaseBdev2", 00:16:16.543 "uuid": "d42765e3-9227-4eeb-a6d4-a2da140d3cf7", 00:16:16.543 "is_configured": 
true, 00:16:16.543 "data_offset": 0, 00:16:16.543 "data_size": 65536 00:16:16.543 }, 00:16:16.543 { 00:16:16.543 "name": "BaseBdev3", 00:16:16.543 "uuid": "a3c102f9-e06a-48e8-b7f7-04da20a22ae5", 00:16:16.543 "is_configured": true, 00:16:16.543 "data_offset": 0, 00:16:16.543 "data_size": 65536 00:16:16.543 }, 00:16:16.543 { 00:16:16.543 "name": "BaseBdev4", 00:16:16.543 "uuid": "bf39406a-f480-44c5-88f5-f28e37446588", 00:16:16.543 "is_configured": true, 00:16:16.543 "data_offset": 0, 00:16:16.543 "data_size": 65536 00:16:16.543 } 00:16:16.543 ] 00:16:16.543 }' 00:16:16.543 13:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.543 13:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.801 13:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:16.801 13:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:17.060 13:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:17.060 13:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.061 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.061 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.061 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.061 13:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:17.061 13:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:17.061 13:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:17.061 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:17.061 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.061 [2024-11-20 13:36:16.334545] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:17.061 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.061 13:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:17.061 13:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:17.061 13:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.061 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.061 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.061 13:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:17.061 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.061 13:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:17.061 13:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:17.061 13:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:17.061 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.061 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.061 [2024-11-20 13:36:16.486788] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:17.319 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.319 13:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:17.320 13:36:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:17.320 13:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.320 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.320 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.320 13:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:17.320 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.320 13:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:17.320 13:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:17.320 13:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:17.320 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.320 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.320 [2024-11-20 13:36:16.630990] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:17.320 [2024-11-20 13:36:16.631046] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:17.320 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.320 13:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:17.320 13:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:17.320 13:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.320 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:17.320 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.320 13:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:17.320 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.320 13:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:17.320 13:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:17.320 13:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:17.320 13:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:17.320 13:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:17.320 13:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:17.320 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.320 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.579 BaseBdev2 00:16:17.579 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.579 13:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:17.579 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:17.579 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:17.579 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:17.579 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:17.579 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:16:17.579 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:17.579 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.579 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.579 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.579 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:17.579 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.579 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.579 [ 00:16:17.579 { 00:16:17.579 "name": "BaseBdev2", 00:16:17.579 "aliases": [ 00:16:17.579 "1bca19c4-f332-44f3-be8d-2fe72303f14a" 00:16:17.579 ], 00:16:17.579 "product_name": "Malloc disk", 00:16:17.579 "block_size": 512, 00:16:17.579 "num_blocks": 65536, 00:16:17.579 "uuid": "1bca19c4-f332-44f3-be8d-2fe72303f14a", 00:16:17.579 "assigned_rate_limits": { 00:16:17.579 "rw_ios_per_sec": 0, 00:16:17.579 "rw_mbytes_per_sec": 0, 00:16:17.579 "r_mbytes_per_sec": 0, 00:16:17.579 "w_mbytes_per_sec": 0 00:16:17.579 }, 00:16:17.579 "claimed": false, 00:16:17.579 "zoned": false, 00:16:17.579 "supported_io_types": { 00:16:17.579 "read": true, 00:16:17.579 "write": true, 00:16:17.579 "unmap": true, 00:16:17.579 "flush": true, 00:16:17.579 "reset": true, 00:16:17.579 "nvme_admin": false, 00:16:17.579 "nvme_io": false, 00:16:17.579 "nvme_io_md": false, 00:16:17.579 "write_zeroes": true, 00:16:17.579 "zcopy": true, 00:16:17.579 "get_zone_info": false, 00:16:17.579 "zone_management": false, 00:16:17.579 "zone_append": false, 00:16:17.579 "compare": false, 00:16:17.579 "compare_and_write": false, 00:16:17.579 "abort": true, 00:16:17.579 "seek_hole": false, 00:16:17.579 
"seek_data": false, 00:16:17.579 "copy": true, 00:16:17.579 "nvme_iov_md": false 00:16:17.579 }, 00:16:17.579 "memory_domains": [ 00:16:17.579 { 00:16:17.579 "dma_device_id": "system", 00:16:17.579 "dma_device_type": 1 00:16:17.579 }, 00:16:17.579 { 00:16:17.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.579 "dma_device_type": 2 00:16:17.579 } 00:16:17.579 ], 00:16:17.579 "driver_specific": {} 00:16:17.579 } 00:16:17.579 ] 00:16:17.580 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.580 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:17.580 13:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:17.580 13:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:17.580 13:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:17.580 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.580 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.580 BaseBdev3 00:16:17.580 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.580 13:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:17.580 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:17.580 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:17.580 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:17.580 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:17.580 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:16:17.580 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:17.580 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.580 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.580 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.580 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:17.580 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.580 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.580 [ 00:16:17.580 { 00:16:17.580 "name": "BaseBdev3", 00:16:17.580 "aliases": [ 00:16:17.580 "20595317-ee1e-455b-9c9f-fa3bf40fd31b" 00:16:17.580 ], 00:16:17.580 "product_name": "Malloc disk", 00:16:17.580 "block_size": 512, 00:16:17.580 "num_blocks": 65536, 00:16:17.580 "uuid": "20595317-ee1e-455b-9c9f-fa3bf40fd31b", 00:16:17.580 "assigned_rate_limits": { 00:16:17.580 "rw_ios_per_sec": 0, 00:16:17.580 "rw_mbytes_per_sec": 0, 00:16:17.580 "r_mbytes_per_sec": 0, 00:16:17.580 "w_mbytes_per_sec": 0 00:16:17.580 }, 00:16:17.580 "claimed": false, 00:16:17.580 "zoned": false, 00:16:17.580 "supported_io_types": { 00:16:17.580 "read": true, 00:16:17.580 "write": true, 00:16:17.580 "unmap": true, 00:16:17.580 "flush": true, 00:16:17.580 "reset": true, 00:16:17.580 "nvme_admin": false, 00:16:17.580 "nvme_io": false, 00:16:17.580 "nvme_io_md": false, 00:16:17.580 "write_zeroes": true, 00:16:17.580 "zcopy": true, 00:16:17.580 "get_zone_info": false, 00:16:17.580 "zone_management": false, 00:16:17.580 "zone_append": false, 00:16:17.580 "compare": false, 00:16:17.580 "compare_and_write": false, 00:16:17.580 "abort": true, 00:16:17.580 "seek_hole": false, 00:16:17.580 "seek_data": false, 
00:16:17.580 "copy": true, 00:16:17.580 "nvme_iov_md": false 00:16:17.580 }, 00:16:17.580 "memory_domains": [ 00:16:17.580 { 00:16:17.580 "dma_device_id": "system", 00:16:17.580 "dma_device_type": 1 00:16:17.580 }, 00:16:17.580 { 00:16:17.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.580 "dma_device_type": 2 00:16:17.580 } 00:16:17.580 ], 00:16:17.580 "driver_specific": {} 00:16:17.580 } 00:16:17.580 ] 00:16:17.580 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.580 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:17.580 13:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:17.580 13:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:17.580 13:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:17.580 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.580 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.580 BaseBdev4 00:16:17.580 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.580 13:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:17.580 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:17.580 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:17.580 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:17.580 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:17.580 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:17.580 
13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:17.580 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.580 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.580 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.580 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:17.580 13:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.580 13:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.580 [ 00:16:17.580 { 00:16:17.580 "name": "BaseBdev4", 00:16:17.580 "aliases": [ 00:16:17.580 "09be9df9-275c-45d3-8a18-29369a2fa8c1" 00:16:17.580 ], 00:16:17.580 "product_name": "Malloc disk", 00:16:17.580 "block_size": 512, 00:16:17.580 "num_blocks": 65536, 00:16:17.580 "uuid": "09be9df9-275c-45d3-8a18-29369a2fa8c1", 00:16:17.580 "assigned_rate_limits": { 00:16:17.580 "rw_ios_per_sec": 0, 00:16:17.580 "rw_mbytes_per_sec": 0, 00:16:17.580 "r_mbytes_per_sec": 0, 00:16:17.580 "w_mbytes_per_sec": 0 00:16:17.580 }, 00:16:17.580 "claimed": false, 00:16:17.580 "zoned": false, 00:16:17.580 "supported_io_types": { 00:16:17.580 "read": true, 00:16:17.580 "write": true, 00:16:17.580 "unmap": true, 00:16:17.580 "flush": true, 00:16:17.580 "reset": true, 00:16:17.580 "nvme_admin": false, 00:16:17.580 "nvme_io": false, 00:16:17.580 "nvme_io_md": false, 00:16:17.580 "write_zeroes": true, 00:16:17.580 "zcopy": true, 00:16:17.580 "get_zone_info": false, 00:16:17.580 "zone_management": false, 00:16:17.580 "zone_append": false, 00:16:17.580 "compare": false, 00:16:17.580 "compare_and_write": false, 00:16:17.580 "abort": true, 00:16:17.580 "seek_hole": false, 00:16:17.580 "seek_data": false, 00:16:17.580 
"copy": true, 00:16:17.580 "nvme_iov_md": false 00:16:17.580 }, 00:16:17.580 "memory_domains": [ 00:16:17.580 { 00:16:17.580 "dma_device_id": "system", 00:16:17.580 "dma_device_type": 1 00:16:17.580 }, 00:16:17.580 { 00:16:17.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.580 "dma_device_type": 2 00:16:17.580 } 00:16:17.580 ], 00:16:17.580 "driver_specific": {} 00:16:17.580 } 00:16:17.580 ] 00:16:17.580 13:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.580 13:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:17.580 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:17.580 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:17.580 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:17.580 13:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.580 13:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.580 [2024-11-20 13:36:17.035704] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:17.580 [2024-11-20 13:36:17.035750] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:17.580 [2024-11-20 13:36:17.035775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:17.580 [2024-11-20 13:36:17.037830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:17.580 [2024-11-20 13:36:17.037885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:17.580 13:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.580 13:36:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:17.580 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:17.580 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:17.580 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:17.580 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:17.580 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:17.580 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.580 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.580 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.580 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.580 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.580 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.580 13:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.580 13:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.839 13:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.839 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.839 "name": "Existed_Raid", 00:16:17.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.839 "strip_size_kb": 64, 00:16:17.839 "state": "configuring", 00:16:17.839 
"raid_level": "concat", 00:16:17.839 "superblock": false, 00:16:17.839 "num_base_bdevs": 4, 00:16:17.839 "num_base_bdevs_discovered": 3, 00:16:17.839 "num_base_bdevs_operational": 4, 00:16:17.839 "base_bdevs_list": [ 00:16:17.839 { 00:16:17.839 "name": "BaseBdev1", 00:16:17.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.839 "is_configured": false, 00:16:17.839 "data_offset": 0, 00:16:17.839 "data_size": 0 00:16:17.839 }, 00:16:17.839 { 00:16:17.839 "name": "BaseBdev2", 00:16:17.839 "uuid": "1bca19c4-f332-44f3-be8d-2fe72303f14a", 00:16:17.839 "is_configured": true, 00:16:17.839 "data_offset": 0, 00:16:17.839 "data_size": 65536 00:16:17.839 }, 00:16:17.839 { 00:16:17.839 "name": "BaseBdev3", 00:16:17.839 "uuid": "20595317-ee1e-455b-9c9f-fa3bf40fd31b", 00:16:17.839 "is_configured": true, 00:16:17.839 "data_offset": 0, 00:16:17.839 "data_size": 65536 00:16:17.839 }, 00:16:17.839 { 00:16:17.839 "name": "BaseBdev4", 00:16:17.839 "uuid": "09be9df9-275c-45d3-8a18-29369a2fa8c1", 00:16:17.839 "is_configured": true, 00:16:17.839 "data_offset": 0, 00:16:17.839 "data_size": 65536 00:16:17.839 } 00:16:17.839 ] 00:16:17.839 }' 00:16:17.839 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.839 13:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.097 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:18.097 13:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.097 13:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.097 [2024-11-20 13:36:17.443212] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:18.097 13:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.097 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:18.097 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:18.097 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:18.097 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:18.097 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.097 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:18.097 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.097 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.097 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.097 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.097 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.097 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.097 13:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.097 13:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.097 13:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.097 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.097 "name": "Existed_Raid", 00:16:18.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.097 "strip_size_kb": 64, 00:16:18.097 "state": "configuring", 00:16:18.097 "raid_level": "concat", 00:16:18.097 "superblock": false, 
00:16:18.097 "num_base_bdevs": 4, 00:16:18.097 "num_base_bdevs_discovered": 2, 00:16:18.097 "num_base_bdevs_operational": 4, 00:16:18.097 "base_bdevs_list": [ 00:16:18.097 { 00:16:18.097 "name": "BaseBdev1", 00:16:18.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.097 "is_configured": false, 00:16:18.097 "data_offset": 0, 00:16:18.097 "data_size": 0 00:16:18.097 }, 00:16:18.097 { 00:16:18.097 "name": null, 00:16:18.097 "uuid": "1bca19c4-f332-44f3-be8d-2fe72303f14a", 00:16:18.097 "is_configured": false, 00:16:18.097 "data_offset": 0, 00:16:18.097 "data_size": 65536 00:16:18.097 }, 00:16:18.097 { 00:16:18.097 "name": "BaseBdev3", 00:16:18.097 "uuid": "20595317-ee1e-455b-9c9f-fa3bf40fd31b", 00:16:18.097 "is_configured": true, 00:16:18.097 "data_offset": 0, 00:16:18.097 "data_size": 65536 00:16:18.097 }, 00:16:18.097 { 00:16:18.097 "name": "BaseBdev4", 00:16:18.097 "uuid": "09be9df9-275c-45d3-8a18-29369a2fa8c1", 00:16:18.097 "is_configured": true, 00:16:18.097 "data_offset": 0, 00:16:18.097 "data_size": 65536 00:16:18.097 } 00:16:18.097 ] 00:16:18.097 }' 00:16:18.097 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.097 13:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.663 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.663 13:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.663 13:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.663 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:18.663 13:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.663 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:18.663 13:36:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:18.663 13:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.663 13:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.663 [2024-11-20 13:36:17.939416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:18.663 BaseBdev1 00:16:18.663 13:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.663 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:18.663 13:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:18.663 13:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:18.663 13:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:18.663 13:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:18.663 13:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:18.663 13:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:18.663 13:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.663 13:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.663 13:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.663 13:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:18.663 13:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.663 13:36:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:18.663 [ 00:16:18.663 { 00:16:18.663 "name": "BaseBdev1", 00:16:18.663 "aliases": [ 00:16:18.663 "5f9ae03e-7868-4685-9b45-e988026b62ed" 00:16:18.663 ], 00:16:18.663 "product_name": "Malloc disk", 00:16:18.663 "block_size": 512, 00:16:18.663 "num_blocks": 65536, 00:16:18.663 "uuid": "5f9ae03e-7868-4685-9b45-e988026b62ed", 00:16:18.663 "assigned_rate_limits": { 00:16:18.663 "rw_ios_per_sec": 0, 00:16:18.663 "rw_mbytes_per_sec": 0, 00:16:18.663 "r_mbytes_per_sec": 0, 00:16:18.663 "w_mbytes_per_sec": 0 00:16:18.663 }, 00:16:18.663 "claimed": true, 00:16:18.663 "claim_type": "exclusive_write", 00:16:18.663 "zoned": false, 00:16:18.663 "supported_io_types": { 00:16:18.663 "read": true, 00:16:18.663 "write": true, 00:16:18.663 "unmap": true, 00:16:18.663 "flush": true, 00:16:18.663 "reset": true, 00:16:18.663 "nvme_admin": false, 00:16:18.663 "nvme_io": false, 00:16:18.663 "nvme_io_md": false, 00:16:18.663 "write_zeroes": true, 00:16:18.663 "zcopy": true, 00:16:18.663 "get_zone_info": false, 00:16:18.664 "zone_management": false, 00:16:18.664 "zone_append": false, 00:16:18.664 "compare": false, 00:16:18.664 "compare_and_write": false, 00:16:18.664 "abort": true, 00:16:18.664 "seek_hole": false, 00:16:18.664 "seek_data": false, 00:16:18.664 "copy": true, 00:16:18.664 "nvme_iov_md": false 00:16:18.664 }, 00:16:18.664 "memory_domains": [ 00:16:18.664 { 00:16:18.664 "dma_device_id": "system", 00:16:18.664 "dma_device_type": 1 00:16:18.664 }, 00:16:18.664 { 00:16:18.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.664 "dma_device_type": 2 00:16:18.664 } 00:16:18.664 ], 00:16:18.664 "driver_specific": {} 00:16:18.664 } 00:16:18.664 ] 00:16:18.664 13:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.664 13:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:18.664 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:18.664 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:18.664 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:18.664 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:18.664 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.664 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:18.664 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.664 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.664 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.664 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.664 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.664 13:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.664 13:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.664 13:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.664 13:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.664 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.664 "name": "Existed_Raid", 00:16:18.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.664 "strip_size_kb": 64, 00:16:18.664 "state": "configuring", 00:16:18.664 "raid_level": "concat", 00:16:18.664 "superblock": false, 
00:16:18.664 "num_base_bdevs": 4, 00:16:18.664 "num_base_bdevs_discovered": 3, 00:16:18.664 "num_base_bdevs_operational": 4, 00:16:18.664 "base_bdevs_list": [ 00:16:18.664 { 00:16:18.664 "name": "BaseBdev1", 00:16:18.664 "uuid": "5f9ae03e-7868-4685-9b45-e988026b62ed", 00:16:18.664 "is_configured": true, 00:16:18.664 "data_offset": 0, 00:16:18.664 "data_size": 65536 00:16:18.664 }, 00:16:18.664 { 00:16:18.664 "name": null, 00:16:18.664 "uuid": "1bca19c4-f332-44f3-be8d-2fe72303f14a", 00:16:18.664 "is_configured": false, 00:16:18.664 "data_offset": 0, 00:16:18.664 "data_size": 65536 00:16:18.664 }, 00:16:18.664 { 00:16:18.664 "name": "BaseBdev3", 00:16:18.664 "uuid": "20595317-ee1e-455b-9c9f-fa3bf40fd31b", 00:16:18.664 "is_configured": true, 00:16:18.664 "data_offset": 0, 00:16:18.664 "data_size": 65536 00:16:18.664 }, 00:16:18.664 { 00:16:18.664 "name": "BaseBdev4", 00:16:18.664 "uuid": "09be9df9-275c-45d3-8a18-29369a2fa8c1", 00:16:18.664 "is_configured": true, 00:16:18.664 "data_offset": 0, 00:16:18.664 "data_size": 65536 00:16:18.664 } 00:16:18.664 ] 00:16:18.664 }' 00:16:18.664 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.664 13:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.921 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.921 13:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.921 13:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.921 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:18.921 13:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.921 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:18.921 13:36:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:18.921 13:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.921 13:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.179 [2024-11-20 13:36:18.406895] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:19.179 13:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.179 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:19.179 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:19.179 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:19.179 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:19.179 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.180 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:19.180 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.180 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.180 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.180 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.180 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.180 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.180 13:36:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.180 13:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.180 13:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.180 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.180 "name": "Existed_Raid", 00:16:19.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.180 "strip_size_kb": 64, 00:16:19.180 "state": "configuring", 00:16:19.180 "raid_level": "concat", 00:16:19.180 "superblock": false, 00:16:19.180 "num_base_bdevs": 4, 00:16:19.180 "num_base_bdevs_discovered": 2, 00:16:19.180 "num_base_bdevs_operational": 4, 00:16:19.180 "base_bdevs_list": [ 00:16:19.180 { 00:16:19.180 "name": "BaseBdev1", 00:16:19.180 "uuid": "5f9ae03e-7868-4685-9b45-e988026b62ed", 00:16:19.180 "is_configured": true, 00:16:19.180 "data_offset": 0, 00:16:19.180 "data_size": 65536 00:16:19.180 }, 00:16:19.180 { 00:16:19.180 "name": null, 00:16:19.180 "uuid": "1bca19c4-f332-44f3-be8d-2fe72303f14a", 00:16:19.180 "is_configured": false, 00:16:19.180 "data_offset": 0, 00:16:19.180 "data_size": 65536 00:16:19.180 }, 00:16:19.180 { 00:16:19.180 "name": null, 00:16:19.180 "uuid": "20595317-ee1e-455b-9c9f-fa3bf40fd31b", 00:16:19.180 "is_configured": false, 00:16:19.180 "data_offset": 0, 00:16:19.180 "data_size": 65536 00:16:19.180 }, 00:16:19.180 { 00:16:19.180 "name": "BaseBdev4", 00:16:19.180 "uuid": "09be9df9-275c-45d3-8a18-29369a2fa8c1", 00:16:19.180 "is_configured": true, 00:16:19.180 "data_offset": 0, 00:16:19.180 "data_size": 65536 00:16:19.180 } 00:16:19.180 ] 00:16:19.180 }' 00:16:19.180 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.180 13:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.438 13:36:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.438 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:19.438 13:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.438 13:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.438 13:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.438 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:19.438 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:19.438 13:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.438 13:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.438 [2024-11-20 13:36:18.858446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:19.438 13:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.438 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:19.438 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:19.439 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:19.439 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:19.439 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.439 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:19.439 13:36:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.439 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.439 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.439 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.439 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.439 13:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.439 13:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.439 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.439 13:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.439 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.439 "name": "Existed_Raid", 00:16:19.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.439 "strip_size_kb": 64, 00:16:19.439 "state": "configuring", 00:16:19.439 "raid_level": "concat", 00:16:19.439 "superblock": false, 00:16:19.439 "num_base_bdevs": 4, 00:16:19.439 "num_base_bdevs_discovered": 3, 00:16:19.439 "num_base_bdevs_operational": 4, 00:16:19.439 "base_bdevs_list": [ 00:16:19.439 { 00:16:19.439 "name": "BaseBdev1", 00:16:19.439 "uuid": "5f9ae03e-7868-4685-9b45-e988026b62ed", 00:16:19.439 "is_configured": true, 00:16:19.439 "data_offset": 0, 00:16:19.439 "data_size": 65536 00:16:19.439 }, 00:16:19.439 { 00:16:19.439 "name": null, 00:16:19.439 "uuid": "1bca19c4-f332-44f3-be8d-2fe72303f14a", 00:16:19.439 "is_configured": false, 00:16:19.439 "data_offset": 0, 00:16:19.439 "data_size": 65536 00:16:19.439 }, 00:16:19.439 { 00:16:19.439 "name": "BaseBdev3", 00:16:19.439 "uuid": 
"20595317-ee1e-455b-9c9f-fa3bf40fd31b", 00:16:19.439 "is_configured": true, 00:16:19.439 "data_offset": 0, 00:16:19.439 "data_size": 65536 00:16:19.439 }, 00:16:19.439 { 00:16:19.439 "name": "BaseBdev4", 00:16:19.439 "uuid": "09be9df9-275c-45d3-8a18-29369a2fa8c1", 00:16:19.439 "is_configured": true, 00:16:19.439 "data_offset": 0, 00:16:19.439 "data_size": 65536 00:16:19.439 } 00:16:19.439 ] 00:16:19.439 }' 00:16:19.439 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.439 13:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.083 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.083 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:20.083 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.083 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.083 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.083 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:20.083 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:20.083 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.083 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.083 [2024-11-20 13:36:19.290232] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:20.083 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.083 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:16:20.083 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:20.083 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:20.083 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:20.083 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.083 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:20.083 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.083 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.083 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.083 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.083 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.083 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.083 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.083 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.083 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.083 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.083 "name": "Existed_Raid", 00:16:20.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.083 "strip_size_kb": 64, 00:16:20.083 "state": "configuring", 00:16:20.083 "raid_level": "concat", 00:16:20.083 "superblock": false, 00:16:20.083 "num_base_bdevs": 4, 00:16:20.083 
"num_base_bdevs_discovered": 2, 00:16:20.083 "num_base_bdevs_operational": 4, 00:16:20.083 "base_bdevs_list": [ 00:16:20.083 { 00:16:20.083 "name": null, 00:16:20.083 "uuid": "5f9ae03e-7868-4685-9b45-e988026b62ed", 00:16:20.083 "is_configured": false, 00:16:20.083 "data_offset": 0, 00:16:20.083 "data_size": 65536 00:16:20.083 }, 00:16:20.083 { 00:16:20.083 "name": null, 00:16:20.083 "uuid": "1bca19c4-f332-44f3-be8d-2fe72303f14a", 00:16:20.083 "is_configured": false, 00:16:20.083 "data_offset": 0, 00:16:20.083 "data_size": 65536 00:16:20.083 }, 00:16:20.083 { 00:16:20.083 "name": "BaseBdev3", 00:16:20.083 "uuid": "20595317-ee1e-455b-9c9f-fa3bf40fd31b", 00:16:20.083 "is_configured": true, 00:16:20.083 "data_offset": 0, 00:16:20.083 "data_size": 65536 00:16:20.083 }, 00:16:20.083 { 00:16:20.083 "name": "BaseBdev4", 00:16:20.083 "uuid": "09be9df9-275c-45d3-8a18-29369a2fa8c1", 00:16:20.083 "is_configured": true, 00:16:20.083 "data_offset": 0, 00:16:20.083 "data_size": 65536 00:16:20.083 } 00:16:20.083 ] 00:16:20.083 }' 00:16:20.083 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.083 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.341 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.341 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.341 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:20.341 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.341 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.599 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:20.599 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:20.599 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.599 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.599 [2024-11-20 13:36:19.835117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:20.599 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.599 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:20.599 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:20.599 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:20.599 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:20.599 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.599 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:20.599 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.599 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.599 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.599 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.599 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.600 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.600 13:36:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.600 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.600 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.600 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.600 "name": "Existed_Raid", 00:16:20.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.600 "strip_size_kb": 64, 00:16:20.600 "state": "configuring", 00:16:20.600 "raid_level": "concat", 00:16:20.600 "superblock": false, 00:16:20.600 "num_base_bdevs": 4, 00:16:20.600 "num_base_bdevs_discovered": 3, 00:16:20.600 "num_base_bdevs_operational": 4, 00:16:20.600 "base_bdevs_list": [ 00:16:20.600 { 00:16:20.600 "name": null, 00:16:20.600 "uuid": "5f9ae03e-7868-4685-9b45-e988026b62ed", 00:16:20.600 "is_configured": false, 00:16:20.600 "data_offset": 0, 00:16:20.600 "data_size": 65536 00:16:20.600 }, 00:16:20.600 { 00:16:20.600 "name": "BaseBdev2", 00:16:20.600 "uuid": "1bca19c4-f332-44f3-be8d-2fe72303f14a", 00:16:20.600 "is_configured": true, 00:16:20.600 "data_offset": 0, 00:16:20.600 "data_size": 65536 00:16:20.600 }, 00:16:20.600 { 00:16:20.600 "name": "BaseBdev3", 00:16:20.600 "uuid": "20595317-ee1e-455b-9c9f-fa3bf40fd31b", 00:16:20.600 "is_configured": true, 00:16:20.600 "data_offset": 0, 00:16:20.600 "data_size": 65536 00:16:20.600 }, 00:16:20.600 { 00:16:20.600 "name": "BaseBdev4", 00:16:20.600 "uuid": "09be9df9-275c-45d3-8a18-29369a2fa8c1", 00:16:20.600 "is_configured": true, 00:16:20.600 "data_offset": 0, 00:16:20.600 "data_size": 65536 00:16:20.600 } 00:16:20.600 ] 00:16:20.600 }' 00:16:20.600 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.600 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.859 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:20.859 13:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.859 13:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.859 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:20.859 13:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.859 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:20.859 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.859 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:20.859 13:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.859 13:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.119 13:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.119 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5f9ae03e-7868-4685-9b45-e988026b62ed 00:16:21.119 13:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.119 13:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.119 [2024-11-20 13:36:20.420823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:21.119 [2024-11-20 13:36:20.420872] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:21.120 [2024-11-20 13:36:20.420882] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:16:21.120 [2024-11-20 13:36:20.421184] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:16:21.120 [2024-11-20 13:36:20.421337] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:21.120 [2024-11-20 13:36:20.421350] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:21.120 [2024-11-20 13:36:20.421589] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.120 NewBaseBdev 00:16:21.120 13:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.120 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:21.120 13:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:21.120 13:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:21.120 13:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:21.120 13:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:21.120 13:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:21.120 13:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:21.120 13:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.120 13:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.120 13:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.120 13:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:21.120 13:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.120 13:36:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:21.120 [ 00:16:21.120 { 00:16:21.120 "name": "NewBaseBdev", 00:16:21.120 "aliases": [ 00:16:21.120 "5f9ae03e-7868-4685-9b45-e988026b62ed" 00:16:21.120 ], 00:16:21.120 "product_name": "Malloc disk", 00:16:21.120 "block_size": 512, 00:16:21.120 "num_blocks": 65536, 00:16:21.120 "uuid": "5f9ae03e-7868-4685-9b45-e988026b62ed", 00:16:21.120 "assigned_rate_limits": { 00:16:21.120 "rw_ios_per_sec": 0, 00:16:21.120 "rw_mbytes_per_sec": 0, 00:16:21.120 "r_mbytes_per_sec": 0, 00:16:21.120 "w_mbytes_per_sec": 0 00:16:21.120 }, 00:16:21.120 "claimed": true, 00:16:21.120 "claim_type": "exclusive_write", 00:16:21.120 "zoned": false, 00:16:21.120 "supported_io_types": { 00:16:21.120 "read": true, 00:16:21.120 "write": true, 00:16:21.120 "unmap": true, 00:16:21.120 "flush": true, 00:16:21.120 "reset": true, 00:16:21.120 "nvme_admin": false, 00:16:21.120 "nvme_io": false, 00:16:21.120 "nvme_io_md": false, 00:16:21.120 "write_zeroes": true, 00:16:21.120 "zcopy": true, 00:16:21.120 "get_zone_info": false, 00:16:21.120 "zone_management": false, 00:16:21.120 "zone_append": false, 00:16:21.120 "compare": false, 00:16:21.120 "compare_and_write": false, 00:16:21.120 "abort": true, 00:16:21.120 "seek_hole": false, 00:16:21.120 "seek_data": false, 00:16:21.120 "copy": true, 00:16:21.120 "nvme_iov_md": false 00:16:21.120 }, 00:16:21.120 "memory_domains": [ 00:16:21.120 { 00:16:21.120 "dma_device_id": "system", 00:16:21.120 "dma_device_type": 1 00:16:21.120 }, 00:16:21.120 { 00:16:21.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.120 "dma_device_type": 2 00:16:21.120 } 00:16:21.120 ], 00:16:21.120 "driver_specific": {} 00:16:21.120 } 00:16:21.120 ] 00:16:21.120 13:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.120 13:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:21.120 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:16:21.120 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:21.120 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:21.120 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:21.120 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.120 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:21.120 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.120 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.120 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.120 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.120 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.120 13:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.120 13:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.120 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.120 13:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.120 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.120 "name": "Existed_Raid", 00:16:21.120 "uuid": "8a39eea8-4197-482f-9726-11c0d70f36c6", 00:16:21.120 "strip_size_kb": 64, 00:16:21.120 "state": "online", 00:16:21.120 "raid_level": "concat", 00:16:21.120 "superblock": false, 00:16:21.120 
"num_base_bdevs": 4, 00:16:21.120 "num_base_bdevs_discovered": 4, 00:16:21.120 "num_base_bdevs_operational": 4, 00:16:21.120 "base_bdevs_list": [ 00:16:21.120 { 00:16:21.120 "name": "NewBaseBdev", 00:16:21.120 "uuid": "5f9ae03e-7868-4685-9b45-e988026b62ed", 00:16:21.120 "is_configured": true, 00:16:21.120 "data_offset": 0, 00:16:21.120 "data_size": 65536 00:16:21.120 }, 00:16:21.120 { 00:16:21.120 "name": "BaseBdev2", 00:16:21.120 "uuid": "1bca19c4-f332-44f3-be8d-2fe72303f14a", 00:16:21.120 "is_configured": true, 00:16:21.120 "data_offset": 0, 00:16:21.120 "data_size": 65536 00:16:21.120 }, 00:16:21.120 { 00:16:21.120 "name": "BaseBdev3", 00:16:21.120 "uuid": "20595317-ee1e-455b-9c9f-fa3bf40fd31b", 00:16:21.120 "is_configured": true, 00:16:21.120 "data_offset": 0, 00:16:21.120 "data_size": 65536 00:16:21.120 }, 00:16:21.120 { 00:16:21.120 "name": "BaseBdev4", 00:16:21.120 "uuid": "09be9df9-275c-45d3-8a18-29369a2fa8c1", 00:16:21.120 "is_configured": true, 00:16:21.120 "data_offset": 0, 00:16:21.120 "data_size": 65536 00:16:21.120 } 00:16:21.120 ] 00:16:21.120 }' 00:16:21.120 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.120 13:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.689 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:21.689 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:21.689 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:21.689 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:21.689 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:21.689 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:21.689 13:36:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:21.689 13:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.689 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:21.689 13:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.689 [2024-11-20 13:36:20.872655] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:21.689 13:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.689 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:21.689 "name": "Existed_Raid", 00:16:21.689 "aliases": [ 00:16:21.689 "8a39eea8-4197-482f-9726-11c0d70f36c6" 00:16:21.689 ], 00:16:21.689 "product_name": "Raid Volume", 00:16:21.689 "block_size": 512, 00:16:21.689 "num_blocks": 262144, 00:16:21.689 "uuid": "8a39eea8-4197-482f-9726-11c0d70f36c6", 00:16:21.689 "assigned_rate_limits": { 00:16:21.689 "rw_ios_per_sec": 0, 00:16:21.689 "rw_mbytes_per_sec": 0, 00:16:21.689 "r_mbytes_per_sec": 0, 00:16:21.689 "w_mbytes_per_sec": 0 00:16:21.689 }, 00:16:21.689 "claimed": false, 00:16:21.689 "zoned": false, 00:16:21.689 "supported_io_types": { 00:16:21.689 "read": true, 00:16:21.689 "write": true, 00:16:21.689 "unmap": true, 00:16:21.689 "flush": true, 00:16:21.689 "reset": true, 00:16:21.689 "nvme_admin": false, 00:16:21.689 "nvme_io": false, 00:16:21.689 "nvme_io_md": false, 00:16:21.689 "write_zeroes": true, 00:16:21.689 "zcopy": false, 00:16:21.689 "get_zone_info": false, 00:16:21.689 "zone_management": false, 00:16:21.689 "zone_append": false, 00:16:21.689 "compare": false, 00:16:21.689 "compare_and_write": false, 00:16:21.689 "abort": false, 00:16:21.689 "seek_hole": false, 00:16:21.689 "seek_data": false, 00:16:21.689 "copy": false, 00:16:21.689 "nvme_iov_md": false 00:16:21.689 }, 
00:16:21.689 "memory_domains": [ 00:16:21.689 { 00:16:21.689 "dma_device_id": "system", 00:16:21.689 "dma_device_type": 1 00:16:21.689 }, 00:16:21.689 { 00:16:21.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.689 "dma_device_type": 2 00:16:21.689 }, 00:16:21.689 { 00:16:21.689 "dma_device_id": "system", 00:16:21.689 "dma_device_type": 1 00:16:21.689 }, 00:16:21.689 { 00:16:21.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.689 "dma_device_type": 2 00:16:21.689 }, 00:16:21.689 { 00:16:21.689 "dma_device_id": "system", 00:16:21.689 "dma_device_type": 1 00:16:21.689 }, 00:16:21.689 { 00:16:21.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.689 "dma_device_type": 2 00:16:21.689 }, 00:16:21.689 { 00:16:21.689 "dma_device_id": "system", 00:16:21.689 "dma_device_type": 1 00:16:21.689 }, 00:16:21.689 { 00:16:21.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.689 "dma_device_type": 2 00:16:21.689 } 00:16:21.689 ], 00:16:21.689 "driver_specific": { 00:16:21.689 "raid": { 00:16:21.689 "uuid": "8a39eea8-4197-482f-9726-11c0d70f36c6", 00:16:21.689 "strip_size_kb": 64, 00:16:21.689 "state": "online", 00:16:21.689 "raid_level": "concat", 00:16:21.689 "superblock": false, 00:16:21.689 "num_base_bdevs": 4, 00:16:21.689 "num_base_bdevs_discovered": 4, 00:16:21.689 "num_base_bdevs_operational": 4, 00:16:21.689 "base_bdevs_list": [ 00:16:21.689 { 00:16:21.689 "name": "NewBaseBdev", 00:16:21.689 "uuid": "5f9ae03e-7868-4685-9b45-e988026b62ed", 00:16:21.689 "is_configured": true, 00:16:21.689 "data_offset": 0, 00:16:21.689 "data_size": 65536 00:16:21.689 }, 00:16:21.689 { 00:16:21.689 "name": "BaseBdev2", 00:16:21.689 "uuid": "1bca19c4-f332-44f3-be8d-2fe72303f14a", 00:16:21.689 "is_configured": true, 00:16:21.689 "data_offset": 0, 00:16:21.689 "data_size": 65536 00:16:21.689 }, 00:16:21.689 { 00:16:21.689 "name": "BaseBdev3", 00:16:21.689 "uuid": "20595317-ee1e-455b-9c9f-fa3bf40fd31b", 00:16:21.689 "is_configured": true, 00:16:21.689 "data_offset": 0, 
00:16:21.689 "data_size": 65536 00:16:21.689 }, 00:16:21.689 { 00:16:21.689 "name": "BaseBdev4", 00:16:21.689 "uuid": "09be9df9-275c-45d3-8a18-29369a2fa8c1", 00:16:21.689 "is_configured": true, 00:16:21.689 "data_offset": 0, 00:16:21.689 "data_size": 65536 00:16:21.689 } 00:16:21.689 ] 00:16:21.689 } 00:16:21.689 } 00:16:21.689 }' 00:16:21.689 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:21.689 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:21.689 BaseBdev2 00:16:21.689 BaseBdev3 00:16:21.689 BaseBdev4' 00:16:21.689 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.689 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:21.689 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:21.689 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.689 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:21.689 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.689 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.689 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.689 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:21.689 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:21.689 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:16:21.689 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.689 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:21.689 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.690 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.690 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.690 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:21.690 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:21.690 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:21.690 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:21.690 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.690 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.690 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.690 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.690 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:21.690 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:21.690 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:21.690 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:16:21.690 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.690 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.690 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.690 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.948 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:21.948 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:21.948 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:21.948 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.948 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.948 [2024-11-20 13:36:21.183855] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:21.948 [2024-11-20 13:36:21.184005] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:21.948 [2024-11-20 13:36:21.184139] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:21.948 [2024-11-20 13:36:21.184213] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:21.948 [2024-11-20 13:36:21.184226] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:21.948 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.948 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71037 00:16:21.948 13:36:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71037 ']' 00:16:21.948 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71037 00:16:21.948 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:16:21.948 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:21.948 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71037 00:16:21.948 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:21.948 killing process with pid 71037 00:16:21.948 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:21.948 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71037' 00:16:21.948 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71037 00:16:21.948 [2024-11-20 13:36:21.227279] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:21.948 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71037 00:16:22.207 [2024-11-20 13:36:21.632330] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:23.585 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:23.585 00:16:23.585 real 0m11.170s 00:16:23.585 user 0m17.638s 00:16:23.585 sys 0m2.253s 00:16:23.585 ************************************ 00:16:23.585 END TEST raid_state_function_test 00:16:23.585 ************************************ 00:16:23.585 13:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:23.585 13:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.585 13:36:22 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:16:23.585 13:36:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:23.585 13:36:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:23.585 13:36:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:23.585 ************************************ 00:16:23.585 START TEST raid_state_function_test_sb 00:16:23.585 ************************************ 00:16:23.585 13:36:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:16:23.585 13:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:16:23.585 13:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:23.585 13:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:23.585 13:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:23.585 13:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:23.585 13:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:23.585 13:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:23.585 13:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:23.585 13:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:23.585 13:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:23.585 13:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:23.585 13:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:23.585 13:36:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:23.585 13:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:23.585 13:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:23.585 13:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:23.585 13:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:23.585 13:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:23.585 13:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:23.585 13:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:23.585 13:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:23.585 13:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:23.585 13:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:23.585 13:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:23.585 13:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:16:23.585 13:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:23.585 13:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:23.585 13:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:23.585 13:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:23.585 13:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=71707 00:16:23.585 13:36:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:23.585 13:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71707' 00:16:23.585 Process raid pid: 71707 00:16:23.585 13:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71707 00:16:23.585 13:36:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 71707 ']' 00:16:23.585 13:36:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.585 13:36:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:23.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.585 13:36:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.585 13:36:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:23.585 13:36:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.585 [2024-11-20 13:36:22.959814] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:16:23.585 [2024-11-20 13:36:22.959937] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:23.844 [2024-11-20 13:36:23.144183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.844 [2024-11-20 13:36:23.262832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.102 [2024-11-20 13:36:23.481652] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:24.103 [2024-11-20 13:36:23.481714] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:24.671 13:36:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:24.671 13:36:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:24.671 13:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:24.671 13:36:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.671 13:36:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.671 [2024-11-20 13:36:23.862531] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:24.671 [2024-11-20 13:36:23.862589] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:24.671 [2024-11-20 13:36:23.862602] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:24.671 [2024-11-20 13:36:23.862615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:24.671 [2024-11-20 13:36:23.862630] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:16:24.671 [2024-11-20 13:36:23.862643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:24.671 [2024-11-20 13:36:23.862651] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:24.671 [2024-11-20 13:36:23.862663] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:24.671 13:36:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.671 13:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:24.671 13:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.671 13:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:24.671 13:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:24.671 13:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.671 13:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:24.671 13:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.671 13:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.671 13:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.671 13:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.671 13:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.671 13:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.671 
13:36:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.671 13:36:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.671 13:36:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.671 13:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.671 "name": "Existed_Raid", 00:16:24.671 "uuid": "b8dbcd74-176d-411a-809e-d19888213d10", 00:16:24.671 "strip_size_kb": 64, 00:16:24.671 "state": "configuring", 00:16:24.671 "raid_level": "concat", 00:16:24.671 "superblock": true, 00:16:24.671 "num_base_bdevs": 4, 00:16:24.671 "num_base_bdevs_discovered": 0, 00:16:24.671 "num_base_bdevs_operational": 4, 00:16:24.671 "base_bdevs_list": [ 00:16:24.671 { 00:16:24.671 "name": "BaseBdev1", 00:16:24.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.671 "is_configured": false, 00:16:24.671 "data_offset": 0, 00:16:24.671 "data_size": 0 00:16:24.671 }, 00:16:24.671 { 00:16:24.671 "name": "BaseBdev2", 00:16:24.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.671 "is_configured": false, 00:16:24.671 "data_offset": 0, 00:16:24.671 "data_size": 0 00:16:24.671 }, 00:16:24.671 { 00:16:24.671 "name": "BaseBdev3", 00:16:24.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.671 "is_configured": false, 00:16:24.671 "data_offset": 0, 00:16:24.671 "data_size": 0 00:16:24.671 }, 00:16:24.671 { 00:16:24.671 "name": "BaseBdev4", 00:16:24.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.671 "is_configured": false, 00:16:24.671 "data_offset": 0, 00:16:24.671 "data_size": 0 00:16:24.671 } 00:16:24.671 ] 00:16:24.671 }' 00:16:24.671 13:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.671 13:36:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.931 13:36:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:24.931 13:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.931 13:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.931 [2024-11-20 13:36:24.258471] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:24.931 [2024-11-20 13:36:24.258693] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:24.931 13:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.931 13:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:24.931 13:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.931 13:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.931 [2024-11-20 13:36:24.270508] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:24.931 [2024-11-20 13:36:24.270558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:24.931 [2024-11-20 13:36:24.270571] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:24.931 [2024-11-20 13:36:24.270591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:24.931 [2024-11-20 13:36:24.270621] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:24.931 [2024-11-20 13:36:24.270637] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:24.931 [2024-11-20 13:36:24.270646] bdev.c:8626:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:16:24.931 [2024-11-20 13:36:24.270659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:24.931 13:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.931 13:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:24.931 13:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.931 13:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.931 [2024-11-20 13:36:24.321245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:24.931 BaseBdev1 00:16:24.931 13:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.931 13:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:24.931 13:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:24.931 13:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:24.931 13:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:24.931 13:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:24.931 13:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:24.931 13:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:24.931 13:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.931 13:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.931 13:36:24 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.931 13:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:24.931 13:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.931 13:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.931 [ 00:16:24.931 { 00:16:24.931 "name": "BaseBdev1", 00:16:24.931 "aliases": [ 00:16:24.931 "16c44ef9-c703-4c2b-8d2e-f5f4482350d8" 00:16:24.931 ], 00:16:24.931 "product_name": "Malloc disk", 00:16:24.931 "block_size": 512, 00:16:24.931 "num_blocks": 65536, 00:16:24.931 "uuid": "16c44ef9-c703-4c2b-8d2e-f5f4482350d8", 00:16:24.931 "assigned_rate_limits": { 00:16:24.931 "rw_ios_per_sec": 0, 00:16:24.931 "rw_mbytes_per_sec": 0, 00:16:24.931 "r_mbytes_per_sec": 0, 00:16:24.931 "w_mbytes_per_sec": 0 00:16:24.931 }, 00:16:24.931 "claimed": true, 00:16:24.931 "claim_type": "exclusive_write", 00:16:24.931 "zoned": false, 00:16:24.931 "supported_io_types": { 00:16:24.931 "read": true, 00:16:24.931 "write": true, 00:16:24.931 "unmap": true, 00:16:24.931 "flush": true, 00:16:24.931 "reset": true, 00:16:24.931 "nvme_admin": false, 00:16:24.931 "nvme_io": false, 00:16:24.931 "nvme_io_md": false, 00:16:24.931 "write_zeroes": true, 00:16:24.931 "zcopy": true, 00:16:24.931 "get_zone_info": false, 00:16:24.931 "zone_management": false, 00:16:24.931 "zone_append": false, 00:16:24.931 "compare": false, 00:16:24.931 "compare_and_write": false, 00:16:24.931 "abort": true, 00:16:24.931 "seek_hole": false, 00:16:24.931 "seek_data": false, 00:16:24.931 "copy": true, 00:16:24.931 "nvme_iov_md": false 00:16:24.931 }, 00:16:24.931 "memory_domains": [ 00:16:24.931 { 00:16:24.931 "dma_device_id": "system", 00:16:24.931 "dma_device_type": 1 00:16:24.931 }, 00:16:24.931 { 00:16:24.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.931 "dma_device_type": 2 00:16:24.931 } 
00:16:24.931 ], 00:16:24.931 "driver_specific": {} 00:16:24.931 } 00:16:24.931 ] 00:16:24.931 13:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.931 13:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:24.931 13:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:24.931 13:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.931 13:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:24.931 13:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:24.931 13:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.931 13:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:24.931 13:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.931 13:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.931 13:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.931 13:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.931 13:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.931 13:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.931 13:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.932 13:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.932 13:36:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.191 13:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.191 "name": "Existed_Raid", 00:16:25.191 "uuid": "49fbea2f-4aad-4560-ac23-26dd435532cb", 00:16:25.191 "strip_size_kb": 64, 00:16:25.191 "state": "configuring", 00:16:25.191 "raid_level": "concat", 00:16:25.191 "superblock": true, 00:16:25.191 "num_base_bdevs": 4, 00:16:25.191 "num_base_bdevs_discovered": 1, 00:16:25.191 "num_base_bdevs_operational": 4, 00:16:25.191 "base_bdevs_list": [ 00:16:25.191 { 00:16:25.191 "name": "BaseBdev1", 00:16:25.191 "uuid": "16c44ef9-c703-4c2b-8d2e-f5f4482350d8", 00:16:25.191 "is_configured": true, 00:16:25.191 "data_offset": 2048, 00:16:25.191 "data_size": 63488 00:16:25.191 }, 00:16:25.191 { 00:16:25.191 "name": "BaseBdev2", 00:16:25.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.191 "is_configured": false, 00:16:25.191 "data_offset": 0, 00:16:25.191 "data_size": 0 00:16:25.191 }, 00:16:25.191 { 00:16:25.191 "name": "BaseBdev3", 00:16:25.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.191 "is_configured": false, 00:16:25.191 "data_offset": 0, 00:16:25.191 "data_size": 0 00:16:25.191 }, 00:16:25.191 { 00:16:25.191 "name": "BaseBdev4", 00:16:25.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.191 "is_configured": false, 00:16:25.191 "data_offset": 0, 00:16:25.191 "data_size": 0 00:16:25.191 } 00:16:25.191 ] 00:16:25.191 }' 00:16:25.191 13:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.191 13:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.451 13:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:25.451 13:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.451 13:36:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.451 [2024-11-20 13:36:24.804627] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:25.451 [2024-11-20 13:36:24.804821] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:25.451 13:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.451 13:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:25.451 13:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.451 13:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.451 [2024-11-20 13:36:24.816682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:25.451 [2024-11-20 13:36:24.818977] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:25.451 [2024-11-20 13:36:24.819143] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:25.451 [2024-11-20 13:36:24.819227] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:25.451 [2024-11-20 13:36:24.819276] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:25.451 [2024-11-20 13:36:24.819306] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:25.451 [2024-11-20 13:36:24.819465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:25.451 13:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.451 13:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:16:25.451 13:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:25.451 13:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:25.451 13:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.451 13:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:25.451 13:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:25.451 13:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.451 13:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:25.451 13:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.451 13:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.451 13:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.451 13:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.451 13:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.451 13:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.451 13:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.451 13:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.451 13:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.451 13:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:16:25.451 "name": "Existed_Raid", 00:16:25.451 "uuid": "728f6e14-9508-4bfc-b813-f7f36c80ef81", 00:16:25.451 "strip_size_kb": 64, 00:16:25.451 "state": "configuring", 00:16:25.451 "raid_level": "concat", 00:16:25.451 "superblock": true, 00:16:25.451 "num_base_bdevs": 4, 00:16:25.451 "num_base_bdevs_discovered": 1, 00:16:25.451 "num_base_bdevs_operational": 4, 00:16:25.451 "base_bdevs_list": [ 00:16:25.451 { 00:16:25.451 "name": "BaseBdev1", 00:16:25.451 "uuid": "16c44ef9-c703-4c2b-8d2e-f5f4482350d8", 00:16:25.451 "is_configured": true, 00:16:25.451 "data_offset": 2048, 00:16:25.451 "data_size": 63488 00:16:25.451 }, 00:16:25.451 { 00:16:25.451 "name": "BaseBdev2", 00:16:25.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.451 "is_configured": false, 00:16:25.451 "data_offset": 0, 00:16:25.451 "data_size": 0 00:16:25.451 }, 00:16:25.451 { 00:16:25.451 "name": "BaseBdev3", 00:16:25.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.451 "is_configured": false, 00:16:25.451 "data_offset": 0, 00:16:25.451 "data_size": 0 00:16:25.451 }, 00:16:25.451 { 00:16:25.451 "name": "BaseBdev4", 00:16:25.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.451 "is_configured": false, 00:16:25.451 "data_offset": 0, 00:16:25.451 "data_size": 0 00:16:25.451 } 00:16:25.451 ] 00:16:25.451 }' 00:16:25.451 13:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.451 13:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.021 13:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:26.021 13:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.021 13:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.021 [2024-11-20 13:36:25.269316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:16:26.021 BaseBdev2 00:16:26.021 13:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.021 13:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:26.021 13:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:26.021 13:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:26.021 13:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:26.021 13:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:26.021 13:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:26.021 13:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:26.021 13:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.021 13:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.021 13:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.021 13:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:26.021 13:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.021 13:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.021 [ 00:16:26.021 { 00:16:26.021 "name": "BaseBdev2", 00:16:26.021 "aliases": [ 00:16:26.021 "e1c3b759-e342-48f4-8963-803987b51366" 00:16:26.021 ], 00:16:26.021 "product_name": "Malloc disk", 00:16:26.021 "block_size": 512, 00:16:26.021 "num_blocks": 65536, 00:16:26.021 "uuid": "e1c3b759-e342-48f4-8963-803987b51366", 
00:16:26.021 "assigned_rate_limits": { 00:16:26.021 "rw_ios_per_sec": 0, 00:16:26.021 "rw_mbytes_per_sec": 0, 00:16:26.021 "r_mbytes_per_sec": 0, 00:16:26.021 "w_mbytes_per_sec": 0 00:16:26.021 }, 00:16:26.021 "claimed": true, 00:16:26.021 "claim_type": "exclusive_write", 00:16:26.021 "zoned": false, 00:16:26.021 "supported_io_types": { 00:16:26.021 "read": true, 00:16:26.021 "write": true, 00:16:26.021 "unmap": true, 00:16:26.021 "flush": true, 00:16:26.021 "reset": true, 00:16:26.021 "nvme_admin": false, 00:16:26.021 "nvme_io": false, 00:16:26.021 "nvme_io_md": false, 00:16:26.021 "write_zeroes": true, 00:16:26.021 "zcopy": true, 00:16:26.021 "get_zone_info": false, 00:16:26.021 "zone_management": false, 00:16:26.021 "zone_append": false, 00:16:26.021 "compare": false, 00:16:26.021 "compare_and_write": false, 00:16:26.021 "abort": true, 00:16:26.021 "seek_hole": false, 00:16:26.021 "seek_data": false, 00:16:26.021 "copy": true, 00:16:26.021 "nvme_iov_md": false 00:16:26.021 }, 00:16:26.021 "memory_domains": [ 00:16:26.021 { 00:16:26.021 "dma_device_id": "system", 00:16:26.021 "dma_device_type": 1 00:16:26.021 }, 00:16:26.021 { 00:16:26.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.021 "dma_device_type": 2 00:16:26.021 } 00:16:26.021 ], 00:16:26.021 "driver_specific": {} 00:16:26.021 } 00:16:26.021 ] 00:16:26.021 13:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.021 13:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:26.021 13:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:26.021 13:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:26.021 13:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:26.021 13:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:16:26.021 13:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:26.021 13:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:26.021 13:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.021 13:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:26.021 13:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.021 13:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.021 13:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.021 13:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.021 13:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.021 13:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.021 13:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.021 13:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.021 13:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.021 13:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.021 "name": "Existed_Raid", 00:16:26.021 "uuid": "728f6e14-9508-4bfc-b813-f7f36c80ef81", 00:16:26.021 "strip_size_kb": 64, 00:16:26.021 "state": "configuring", 00:16:26.021 "raid_level": "concat", 00:16:26.021 "superblock": true, 00:16:26.021 "num_base_bdevs": 4, 00:16:26.021 "num_base_bdevs_discovered": 2, 00:16:26.021 
"num_base_bdevs_operational": 4, 00:16:26.021 "base_bdevs_list": [ 00:16:26.021 { 00:16:26.021 "name": "BaseBdev1", 00:16:26.021 "uuid": "16c44ef9-c703-4c2b-8d2e-f5f4482350d8", 00:16:26.021 "is_configured": true, 00:16:26.021 "data_offset": 2048, 00:16:26.021 "data_size": 63488 00:16:26.021 }, 00:16:26.021 { 00:16:26.021 "name": "BaseBdev2", 00:16:26.021 "uuid": "e1c3b759-e342-48f4-8963-803987b51366", 00:16:26.021 "is_configured": true, 00:16:26.021 "data_offset": 2048, 00:16:26.021 "data_size": 63488 00:16:26.021 }, 00:16:26.021 { 00:16:26.021 "name": "BaseBdev3", 00:16:26.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.021 "is_configured": false, 00:16:26.021 "data_offset": 0, 00:16:26.021 "data_size": 0 00:16:26.021 }, 00:16:26.021 { 00:16:26.021 "name": "BaseBdev4", 00:16:26.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.021 "is_configured": false, 00:16:26.021 "data_offset": 0, 00:16:26.021 "data_size": 0 00:16:26.021 } 00:16:26.021 ] 00:16:26.021 }' 00:16:26.021 13:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.021 13:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.281 13:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:26.281 13:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.281 13:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.540 [2024-11-20 13:36:25.779787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:26.540 BaseBdev3 00:16:26.540 13:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.540 13:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:26.540 13:36:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:26.540 13:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:26.540 13:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:26.540 13:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:26.540 13:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:26.540 13:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:26.540 13:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.540 13:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.540 13:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.540 13:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:26.540 13:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.540 13:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.540 [ 00:16:26.540 { 00:16:26.540 "name": "BaseBdev3", 00:16:26.540 "aliases": [ 00:16:26.540 "3d423c8f-76eb-4cf2-83dd-704b6b25a287" 00:16:26.540 ], 00:16:26.540 "product_name": "Malloc disk", 00:16:26.540 "block_size": 512, 00:16:26.540 "num_blocks": 65536, 00:16:26.540 "uuid": "3d423c8f-76eb-4cf2-83dd-704b6b25a287", 00:16:26.540 "assigned_rate_limits": { 00:16:26.540 "rw_ios_per_sec": 0, 00:16:26.540 "rw_mbytes_per_sec": 0, 00:16:26.540 "r_mbytes_per_sec": 0, 00:16:26.540 "w_mbytes_per_sec": 0 00:16:26.540 }, 00:16:26.540 "claimed": true, 00:16:26.540 "claim_type": "exclusive_write", 00:16:26.540 "zoned": false, 00:16:26.540 "supported_io_types": { 
00:16:26.540 "read": true, 00:16:26.540 "write": true, 00:16:26.540 "unmap": true, 00:16:26.540 "flush": true, 00:16:26.540 "reset": true, 00:16:26.540 "nvme_admin": false, 00:16:26.540 "nvme_io": false, 00:16:26.540 "nvme_io_md": false, 00:16:26.540 "write_zeroes": true, 00:16:26.540 "zcopy": true, 00:16:26.540 "get_zone_info": false, 00:16:26.540 "zone_management": false, 00:16:26.540 "zone_append": false, 00:16:26.540 "compare": false, 00:16:26.540 "compare_and_write": false, 00:16:26.540 "abort": true, 00:16:26.540 "seek_hole": false, 00:16:26.540 "seek_data": false, 00:16:26.540 "copy": true, 00:16:26.540 "nvme_iov_md": false 00:16:26.540 }, 00:16:26.540 "memory_domains": [ 00:16:26.540 { 00:16:26.540 "dma_device_id": "system", 00:16:26.540 "dma_device_type": 1 00:16:26.540 }, 00:16:26.540 { 00:16:26.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.540 "dma_device_type": 2 00:16:26.540 } 00:16:26.540 ], 00:16:26.540 "driver_specific": {} 00:16:26.540 } 00:16:26.540 ] 00:16:26.540 13:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.540 13:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:26.540 13:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:26.540 13:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:26.540 13:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:26.540 13:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:26.540 13:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:26.540 13:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:26.540 13:36:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.540 13:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:26.540 13:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.540 13:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.540 13:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.540 13:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.540 13:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.540 13:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.540 13:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.540 13:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.540 13:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.540 13:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.540 "name": "Existed_Raid", 00:16:26.540 "uuid": "728f6e14-9508-4bfc-b813-f7f36c80ef81", 00:16:26.540 "strip_size_kb": 64, 00:16:26.540 "state": "configuring", 00:16:26.540 "raid_level": "concat", 00:16:26.540 "superblock": true, 00:16:26.540 "num_base_bdevs": 4, 00:16:26.540 "num_base_bdevs_discovered": 3, 00:16:26.540 "num_base_bdevs_operational": 4, 00:16:26.540 "base_bdevs_list": [ 00:16:26.540 { 00:16:26.540 "name": "BaseBdev1", 00:16:26.540 "uuid": "16c44ef9-c703-4c2b-8d2e-f5f4482350d8", 00:16:26.540 "is_configured": true, 00:16:26.540 "data_offset": 2048, 00:16:26.540 "data_size": 63488 00:16:26.540 }, 00:16:26.540 { 00:16:26.540 "name": "BaseBdev2", 00:16:26.540 
"uuid": "e1c3b759-e342-48f4-8963-803987b51366", 00:16:26.540 "is_configured": true, 00:16:26.540 "data_offset": 2048, 00:16:26.540 "data_size": 63488 00:16:26.540 }, 00:16:26.540 { 00:16:26.540 "name": "BaseBdev3", 00:16:26.540 "uuid": "3d423c8f-76eb-4cf2-83dd-704b6b25a287", 00:16:26.540 "is_configured": true, 00:16:26.540 "data_offset": 2048, 00:16:26.540 "data_size": 63488 00:16:26.540 }, 00:16:26.540 { 00:16:26.540 "name": "BaseBdev4", 00:16:26.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.540 "is_configured": false, 00:16:26.540 "data_offset": 0, 00:16:26.540 "data_size": 0 00:16:26.540 } 00:16:26.540 ] 00:16:26.540 }' 00:16:26.540 13:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.540 13:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.800 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:26.800 13:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.800 13:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.800 [2024-11-20 13:36:26.260347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:26.800 [2024-11-20 13:36:26.260606] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:26.800 [2024-11-20 13:36:26.260622] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:26.800 [2024-11-20 13:36:26.260901] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:26.800 BaseBdev4 00:16:26.800 [2024-11-20 13:36:26.261045] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:26.800 [2024-11-20 13:36:26.261072] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:16:26.800 [2024-11-20 13:36:26.261225] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.800 13:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.800 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:26.800 13:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:26.800 13:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:26.800 13:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:26.800 13:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:26.800 13:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:26.800 13:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:26.800 13:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.800 13:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.800 13:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.800 13:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:26.800 13:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.800 13:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.800 [ 00:16:26.800 { 00:16:26.800 "name": "BaseBdev4", 00:16:26.800 "aliases": [ 00:16:26.800 "b45a698d-1df8-4459-8029-4bab78fd5447" 00:16:26.800 ], 00:16:26.800 "product_name": "Malloc disk", 00:16:26.800 "block_size": 512, 00:16:26.800 
"num_blocks": 65536, 00:16:26.800 "uuid": "b45a698d-1df8-4459-8029-4bab78fd5447", 00:16:26.800 "assigned_rate_limits": { 00:16:26.800 "rw_ios_per_sec": 0, 00:16:26.800 "rw_mbytes_per_sec": 0, 00:16:26.800 "r_mbytes_per_sec": 0, 00:16:26.800 "w_mbytes_per_sec": 0 00:16:26.800 }, 00:16:26.800 "claimed": true, 00:16:26.800 "claim_type": "exclusive_write", 00:16:26.800 "zoned": false, 00:16:26.800 "supported_io_types": { 00:16:26.800 "read": true, 00:16:26.800 "write": true, 00:16:26.800 "unmap": true, 00:16:26.800 "flush": true, 00:16:26.800 "reset": true, 00:16:26.800 "nvme_admin": false, 00:16:26.800 "nvme_io": false, 00:16:26.800 "nvme_io_md": false, 00:16:26.800 "write_zeroes": true, 00:16:26.800 "zcopy": true, 00:16:26.800 "get_zone_info": false, 00:16:26.800 "zone_management": false, 00:16:27.060 "zone_append": false, 00:16:27.060 "compare": false, 00:16:27.060 "compare_and_write": false, 00:16:27.060 "abort": true, 00:16:27.060 "seek_hole": false, 00:16:27.060 "seek_data": false, 00:16:27.060 "copy": true, 00:16:27.060 "nvme_iov_md": false 00:16:27.060 }, 00:16:27.060 "memory_domains": [ 00:16:27.060 { 00:16:27.060 "dma_device_id": "system", 00:16:27.060 "dma_device_type": 1 00:16:27.060 }, 00:16:27.060 { 00:16:27.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.060 "dma_device_type": 2 00:16:27.060 } 00:16:27.060 ], 00:16:27.060 "driver_specific": {} 00:16:27.060 } 00:16:27.060 ] 00:16:27.060 13:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.060 13:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:27.060 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:27.060 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:27.060 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:16:27.060 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:27.060 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.061 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:27.061 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:27.061 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:27.061 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.061 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.061 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.061 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.061 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.061 13:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.061 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.061 13:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.061 13:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.061 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.061 "name": "Existed_Raid", 00:16:27.061 "uuid": "728f6e14-9508-4bfc-b813-f7f36c80ef81", 00:16:27.061 "strip_size_kb": 64, 00:16:27.061 "state": "online", 00:16:27.061 "raid_level": "concat", 00:16:27.061 "superblock": true, 00:16:27.061 "num_base_bdevs": 4, 
00:16:27.061 "num_base_bdevs_discovered": 4, 00:16:27.061 "num_base_bdevs_operational": 4, 00:16:27.061 "base_bdevs_list": [ 00:16:27.061 { 00:16:27.061 "name": "BaseBdev1", 00:16:27.061 "uuid": "16c44ef9-c703-4c2b-8d2e-f5f4482350d8", 00:16:27.061 "is_configured": true, 00:16:27.061 "data_offset": 2048, 00:16:27.061 "data_size": 63488 00:16:27.061 }, 00:16:27.061 { 00:16:27.061 "name": "BaseBdev2", 00:16:27.061 "uuid": "e1c3b759-e342-48f4-8963-803987b51366", 00:16:27.061 "is_configured": true, 00:16:27.061 "data_offset": 2048, 00:16:27.061 "data_size": 63488 00:16:27.061 }, 00:16:27.061 { 00:16:27.061 "name": "BaseBdev3", 00:16:27.061 "uuid": "3d423c8f-76eb-4cf2-83dd-704b6b25a287", 00:16:27.061 "is_configured": true, 00:16:27.061 "data_offset": 2048, 00:16:27.061 "data_size": 63488 00:16:27.061 }, 00:16:27.061 { 00:16:27.061 "name": "BaseBdev4", 00:16:27.061 "uuid": "b45a698d-1df8-4459-8029-4bab78fd5447", 00:16:27.061 "is_configured": true, 00:16:27.061 "data_offset": 2048, 00:16:27.061 "data_size": 63488 00:16:27.061 } 00:16:27.061 ] 00:16:27.061 }' 00:16:27.061 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.061 13:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.320 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:27.320 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:27.320 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:27.320 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:27.320 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:27.320 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:27.320 
13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:27.320 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:27.320 13:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.320 13:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.320 [2024-11-20 13:36:26.712192] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:27.320 13:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.320 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:27.320 "name": "Existed_Raid", 00:16:27.320 "aliases": [ 00:16:27.320 "728f6e14-9508-4bfc-b813-f7f36c80ef81" 00:16:27.320 ], 00:16:27.320 "product_name": "Raid Volume", 00:16:27.320 "block_size": 512, 00:16:27.320 "num_blocks": 253952, 00:16:27.320 "uuid": "728f6e14-9508-4bfc-b813-f7f36c80ef81", 00:16:27.320 "assigned_rate_limits": { 00:16:27.320 "rw_ios_per_sec": 0, 00:16:27.320 "rw_mbytes_per_sec": 0, 00:16:27.320 "r_mbytes_per_sec": 0, 00:16:27.320 "w_mbytes_per_sec": 0 00:16:27.320 }, 00:16:27.320 "claimed": false, 00:16:27.320 "zoned": false, 00:16:27.320 "supported_io_types": { 00:16:27.320 "read": true, 00:16:27.320 "write": true, 00:16:27.320 "unmap": true, 00:16:27.321 "flush": true, 00:16:27.321 "reset": true, 00:16:27.321 "nvme_admin": false, 00:16:27.321 "nvme_io": false, 00:16:27.321 "nvme_io_md": false, 00:16:27.321 "write_zeroes": true, 00:16:27.321 "zcopy": false, 00:16:27.321 "get_zone_info": false, 00:16:27.321 "zone_management": false, 00:16:27.321 "zone_append": false, 00:16:27.321 "compare": false, 00:16:27.321 "compare_and_write": false, 00:16:27.321 "abort": false, 00:16:27.321 "seek_hole": false, 00:16:27.321 "seek_data": false, 00:16:27.321 "copy": false, 00:16:27.321 
"nvme_iov_md": false 00:16:27.321 }, 00:16:27.321 "memory_domains": [ 00:16:27.321 { 00:16:27.321 "dma_device_id": "system", 00:16:27.321 "dma_device_type": 1 00:16:27.321 }, 00:16:27.321 { 00:16:27.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.321 "dma_device_type": 2 00:16:27.321 }, 00:16:27.321 { 00:16:27.321 "dma_device_id": "system", 00:16:27.321 "dma_device_type": 1 00:16:27.321 }, 00:16:27.321 { 00:16:27.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.321 "dma_device_type": 2 00:16:27.321 }, 00:16:27.321 { 00:16:27.321 "dma_device_id": "system", 00:16:27.321 "dma_device_type": 1 00:16:27.321 }, 00:16:27.321 { 00:16:27.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.321 "dma_device_type": 2 00:16:27.321 }, 00:16:27.321 { 00:16:27.321 "dma_device_id": "system", 00:16:27.321 "dma_device_type": 1 00:16:27.321 }, 00:16:27.321 { 00:16:27.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.321 "dma_device_type": 2 00:16:27.321 } 00:16:27.321 ], 00:16:27.321 "driver_specific": { 00:16:27.321 "raid": { 00:16:27.321 "uuid": "728f6e14-9508-4bfc-b813-f7f36c80ef81", 00:16:27.321 "strip_size_kb": 64, 00:16:27.321 "state": "online", 00:16:27.321 "raid_level": "concat", 00:16:27.321 "superblock": true, 00:16:27.321 "num_base_bdevs": 4, 00:16:27.321 "num_base_bdevs_discovered": 4, 00:16:27.321 "num_base_bdevs_operational": 4, 00:16:27.321 "base_bdevs_list": [ 00:16:27.321 { 00:16:27.321 "name": "BaseBdev1", 00:16:27.321 "uuid": "16c44ef9-c703-4c2b-8d2e-f5f4482350d8", 00:16:27.321 "is_configured": true, 00:16:27.321 "data_offset": 2048, 00:16:27.321 "data_size": 63488 00:16:27.321 }, 00:16:27.321 { 00:16:27.321 "name": "BaseBdev2", 00:16:27.321 "uuid": "e1c3b759-e342-48f4-8963-803987b51366", 00:16:27.321 "is_configured": true, 00:16:27.321 "data_offset": 2048, 00:16:27.321 "data_size": 63488 00:16:27.321 }, 00:16:27.321 { 00:16:27.321 "name": "BaseBdev3", 00:16:27.321 "uuid": "3d423c8f-76eb-4cf2-83dd-704b6b25a287", 00:16:27.321 "is_configured": true, 
00:16:27.321 "data_offset": 2048, 00:16:27.321 "data_size": 63488 00:16:27.321 }, 00:16:27.321 { 00:16:27.321 "name": "BaseBdev4", 00:16:27.321 "uuid": "b45a698d-1df8-4459-8029-4bab78fd5447", 00:16:27.321 "is_configured": true, 00:16:27.321 "data_offset": 2048, 00:16:27.321 "data_size": 63488 00:16:27.321 } 00:16:27.321 ] 00:16:27.321 } 00:16:27.321 } 00:16:27.321 }' 00:16:27.321 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:27.321 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:27.321 BaseBdev2 00:16:27.321 BaseBdev3 00:16:27.321 BaseBdev4' 00:16:27.321 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.580 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:27.580 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.580 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:27.580 13:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.580 13:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.580 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.580 13:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.580 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:27.580 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:27.580 13:36:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.580 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:27.580 13:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.580 13:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.580 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.581 13:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.581 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:27.581 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:27.581 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.581 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:27.581 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.581 13:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.581 13:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.581 13:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.581 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:27.581 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:27.581 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:16:27.581 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.581 13:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:27.581 13:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.581 13:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.581 13:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.581 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:27.581 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:27.581 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:27.581 13:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.581 13:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.581 [2024-11-20 13:36:27.039454] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:27.581 [2024-11-20 13:36:27.039488] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:27.581 [2024-11-20 13:36:27.039541] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:27.840 13:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.840 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:27.840 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:16:27.840 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:16:27.840 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:16:27.840 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:16:27.840 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:16:27.840 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:27.840 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:16:27.840 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:27.840 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:27.840 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:27.840 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.840 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.840 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.840 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.840 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.840 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.840 13:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.840 13:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.840 13:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:27.840 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.840 "name": "Existed_Raid", 00:16:27.840 "uuid": "728f6e14-9508-4bfc-b813-f7f36c80ef81", 00:16:27.840 "strip_size_kb": 64, 00:16:27.840 "state": "offline", 00:16:27.840 "raid_level": "concat", 00:16:27.840 "superblock": true, 00:16:27.840 "num_base_bdevs": 4, 00:16:27.840 "num_base_bdevs_discovered": 3, 00:16:27.840 "num_base_bdevs_operational": 3, 00:16:27.840 "base_bdevs_list": [ 00:16:27.840 { 00:16:27.840 "name": null, 00:16:27.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.840 "is_configured": false, 00:16:27.840 "data_offset": 0, 00:16:27.840 "data_size": 63488 00:16:27.840 }, 00:16:27.840 { 00:16:27.840 "name": "BaseBdev2", 00:16:27.840 "uuid": "e1c3b759-e342-48f4-8963-803987b51366", 00:16:27.840 "is_configured": true, 00:16:27.840 "data_offset": 2048, 00:16:27.840 "data_size": 63488 00:16:27.840 }, 00:16:27.840 { 00:16:27.840 "name": "BaseBdev3", 00:16:27.840 "uuid": "3d423c8f-76eb-4cf2-83dd-704b6b25a287", 00:16:27.840 "is_configured": true, 00:16:27.840 "data_offset": 2048, 00:16:27.840 "data_size": 63488 00:16:27.840 }, 00:16:27.840 { 00:16:27.840 "name": "BaseBdev4", 00:16:27.840 "uuid": "b45a698d-1df8-4459-8029-4bab78fd5447", 00:16:27.840 "is_configured": true, 00:16:27.840 "data_offset": 2048, 00:16:27.840 "data_size": 63488 00:16:27.840 } 00:16:27.840 ] 00:16:27.840 }' 00:16:27.840 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.840 13:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.099 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:28.099 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:28.099 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.099 
13:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.099 13:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.099 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:28.358 13:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.358 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:28.358 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:28.358 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:28.358 13:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.358 13:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.358 [2024-11-20 13:36:27.595990] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:28.358 13:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.358 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:28.358 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:28.358 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.358 13:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.358 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:28.358 13:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.358 13:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:28.358 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:28.358 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:28.358 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:28.358 13:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.358 13:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.358 [2024-11-20 13:36:27.747508] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:28.617 13:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.617 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:28.617 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:28.617 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.617 13:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.617 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:28.617 13:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.617 13:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.617 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:28.617 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:28.617 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:28.617 13:36:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.617 13:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.618 [2024-11-20 13:36:27.898493] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:28.618 [2024-11-20 13:36:27.898541] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:28.618 13:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.618 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:28.618 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:28.618 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.618 13:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.618 13:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:28.618 13:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.618 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.618 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:28.618 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:28.618 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:28.618 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:28.618 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:28.618 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:16:28.618 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.618 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.618 BaseBdev2 00:16:28.618 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.618 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:28.618 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:28.618 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:28.618 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:28.618 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:28.618 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:28.618 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:28.618 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.618 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.618 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.618 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:28.618 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.618 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.877 [ 00:16:28.877 { 00:16:28.877 "name": "BaseBdev2", 00:16:28.877 "aliases": [ 00:16:28.877 
"082bc4e8-7547-4e5a-9304-dfec5c23cbfa" 00:16:28.877 ], 00:16:28.877 "product_name": "Malloc disk", 00:16:28.877 "block_size": 512, 00:16:28.877 "num_blocks": 65536, 00:16:28.877 "uuid": "082bc4e8-7547-4e5a-9304-dfec5c23cbfa", 00:16:28.877 "assigned_rate_limits": { 00:16:28.877 "rw_ios_per_sec": 0, 00:16:28.877 "rw_mbytes_per_sec": 0, 00:16:28.877 "r_mbytes_per_sec": 0, 00:16:28.877 "w_mbytes_per_sec": 0 00:16:28.877 }, 00:16:28.877 "claimed": false, 00:16:28.877 "zoned": false, 00:16:28.877 "supported_io_types": { 00:16:28.877 "read": true, 00:16:28.877 "write": true, 00:16:28.877 "unmap": true, 00:16:28.877 "flush": true, 00:16:28.877 "reset": true, 00:16:28.877 "nvme_admin": false, 00:16:28.877 "nvme_io": false, 00:16:28.877 "nvme_io_md": false, 00:16:28.877 "write_zeroes": true, 00:16:28.877 "zcopy": true, 00:16:28.877 "get_zone_info": false, 00:16:28.877 "zone_management": false, 00:16:28.877 "zone_append": false, 00:16:28.877 "compare": false, 00:16:28.877 "compare_and_write": false, 00:16:28.877 "abort": true, 00:16:28.877 "seek_hole": false, 00:16:28.877 "seek_data": false, 00:16:28.877 "copy": true, 00:16:28.877 "nvme_iov_md": false 00:16:28.877 }, 00:16:28.877 "memory_domains": [ 00:16:28.877 { 00:16:28.877 "dma_device_id": "system", 00:16:28.877 "dma_device_type": 1 00:16:28.877 }, 00:16:28.877 { 00:16:28.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.877 "dma_device_type": 2 00:16:28.877 } 00:16:28.877 ], 00:16:28.877 "driver_specific": {} 00:16:28.877 } 00:16:28.877 ] 00:16:28.877 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.877 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:28.877 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:28.877 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:28.877 13:36:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:28.877 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.877 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.877 BaseBdev3 00:16:28.877 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.877 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:28.877 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:28.877 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:28.877 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:28.877 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:28.877 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:28.877 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:28.877 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.877 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.877 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.877 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:28.877 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.877 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.877 [ 00:16:28.877 { 
00:16:28.877 "name": "BaseBdev3", 00:16:28.877 "aliases": [ 00:16:28.877 "7109c1d7-d0c8-4146-9d18-bbabf7840694" 00:16:28.877 ], 00:16:28.877 "product_name": "Malloc disk", 00:16:28.877 "block_size": 512, 00:16:28.877 "num_blocks": 65536, 00:16:28.877 "uuid": "7109c1d7-d0c8-4146-9d18-bbabf7840694", 00:16:28.877 "assigned_rate_limits": { 00:16:28.877 "rw_ios_per_sec": 0, 00:16:28.877 "rw_mbytes_per_sec": 0, 00:16:28.877 "r_mbytes_per_sec": 0, 00:16:28.877 "w_mbytes_per_sec": 0 00:16:28.878 }, 00:16:28.878 "claimed": false, 00:16:28.878 "zoned": false, 00:16:28.878 "supported_io_types": { 00:16:28.878 "read": true, 00:16:28.878 "write": true, 00:16:28.878 "unmap": true, 00:16:28.878 "flush": true, 00:16:28.878 "reset": true, 00:16:28.878 "nvme_admin": false, 00:16:28.878 "nvme_io": false, 00:16:28.878 "nvme_io_md": false, 00:16:28.878 "write_zeroes": true, 00:16:28.878 "zcopy": true, 00:16:28.878 "get_zone_info": false, 00:16:28.878 "zone_management": false, 00:16:28.878 "zone_append": false, 00:16:28.878 "compare": false, 00:16:28.878 "compare_and_write": false, 00:16:28.878 "abort": true, 00:16:28.878 "seek_hole": false, 00:16:28.878 "seek_data": false, 00:16:28.878 "copy": true, 00:16:28.878 "nvme_iov_md": false 00:16:28.878 }, 00:16:28.878 "memory_domains": [ 00:16:28.878 { 00:16:28.878 "dma_device_id": "system", 00:16:28.878 "dma_device_type": 1 00:16:28.878 }, 00:16:28.878 { 00:16:28.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.878 "dma_device_type": 2 00:16:28.878 } 00:16:28.878 ], 00:16:28.878 "driver_specific": {} 00:16:28.878 } 00:16:28.878 ] 00:16:28.878 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.878 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:28.878 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:28.878 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:16:28.878 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:28.878 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.878 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.878 BaseBdev4 00:16:28.878 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.878 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:28.878 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:28.878 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:28.878 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:28.878 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:28.878 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:28.878 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:28.878 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.878 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.878 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.878 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:28.878 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.878 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:16:28.878 [ 00:16:28.878 { 00:16:28.878 "name": "BaseBdev4", 00:16:28.878 "aliases": [ 00:16:28.878 "d416acd8-9b35-4910-a676-53c9e3bc46aa" 00:16:28.878 ], 00:16:28.878 "product_name": "Malloc disk", 00:16:28.878 "block_size": 512, 00:16:28.878 "num_blocks": 65536, 00:16:28.878 "uuid": "d416acd8-9b35-4910-a676-53c9e3bc46aa", 00:16:28.878 "assigned_rate_limits": { 00:16:28.878 "rw_ios_per_sec": 0, 00:16:28.878 "rw_mbytes_per_sec": 0, 00:16:28.878 "r_mbytes_per_sec": 0, 00:16:28.878 "w_mbytes_per_sec": 0 00:16:28.878 }, 00:16:28.878 "claimed": false, 00:16:28.878 "zoned": false, 00:16:28.878 "supported_io_types": { 00:16:28.878 "read": true, 00:16:28.878 "write": true, 00:16:28.878 "unmap": true, 00:16:28.878 "flush": true, 00:16:28.878 "reset": true, 00:16:28.878 "nvme_admin": false, 00:16:28.878 "nvme_io": false, 00:16:28.878 "nvme_io_md": false, 00:16:28.878 "write_zeroes": true, 00:16:28.878 "zcopy": true, 00:16:28.878 "get_zone_info": false, 00:16:28.878 "zone_management": false, 00:16:28.878 "zone_append": false, 00:16:28.878 "compare": false, 00:16:28.878 "compare_and_write": false, 00:16:28.878 "abort": true, 00:16:28.878 "seek_hole": false, 00:16:28.878 "seek_data": false, 00:16:28.878 "copy": true, 00:16:28.878 "nvme_iov_md": false 00:16:28.878 }, 00:16:28.878 "memory_domains": [ 00:16:28.878 { 00:16:28.878 "dma_device_id": "system", 00:16:28.878 "dma_device_type": 1 00:16:28.878 }, 00:16:28.878 { 00:16:28.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.878 "dma_device_type": 2 00:16:28.878 } 00:16:28.878 ], 00:16:28.878 "driver_specific": {} 00:16:28.878 } 00:16:28.878 ] 00:16:28.878 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.878 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:28.878 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:28.878 13:36:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:28.878 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:28.878 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.878 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.878 [2024-11-20 13:36:28.320840] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:28.878 [2024-11-20 13:36:28.321010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:28.878 [2024-11-20 13:36:28.321123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:28.878 [2024-11-20 13:36:28.323391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:28.878 [2024-11-20 13:36:28.323589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:28.878 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.878 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:28.878 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:28.878 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:28.878 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:28.878 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.878 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:16:28.878 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.878 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.878 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.878 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.878 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.878 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.878 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.878 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.138 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.138 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.138 "name": "Existed_Raid", 00:16:29.138 "uuid": "6279eb71-dff8-4bd5-b28d-28f0e51cff99", 00:16:29.138 "strip_size_kb": 64, 00:16:29.138 "state": "configuring", 00:16:29.138 "raid_level": "concat", 00:16:29.138 "superblock": true, 00:16:29.138 "num_base_bdevs": 4, 00:16:29.138 "num_base_bdevs_discovered": 3, 00:16:29.138 "num_base_bdevs_operational": 4, 00:16:29.138 "base_bdevs_list": [ 00:16:29.138 { 00:16:29.138 "name": "BaseBdev1", 00:16:29.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.138 "is_configured": false, 00:16:29.138 "data_offset": 0, 00:16:29.138 "data_size": 0 00:16:29.138 }, 00:16:29.138 { 00:16:29.138 "name": "BaseBdev2", 00:16:29.138 "uuid": "082bc4e8-7547-4e5a-9304-dfec5c23cbfa", 00:16:29.138 "is_configured": true, 00:16:29.138 "data_offset": 2048, 00:16:29.138 "data_size": 63488 
00:16:29.138 }, 00:16:29.138 { 00:16:29.138 "name": "BaseBdev3", 00:16:29.138 "uuid": "7109c1d7-d0c8-4146-9d18-bbabf7840694", 00:16:29.138 "is_configured": true, 00:16:29.138 "data_offset": 2048, 00:16:29.138 "data_size": 63488 00:16:29.138 }, 00:16:29.138 { 00:16:29.138 "name": "BaseBdev4", 00:16:29.138 "uuid": "d416acd8-9b35-4910-a676-53c9e3bc46aa", 00:16:29.138 "is_configured": true, 00:16:29.138 "data_offset": 2048, 00:16:29.138 "data_size": 63488 00:16:29.138 } 00:16:29.138 ] 00:16:29.138 }' 00:16:29.138 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.138 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.398 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:29.398 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.398 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.398 [2024-11-20 13:36:28.732251] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:29.398 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.398 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:29.398 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:29.398 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.398 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:29.398 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.398 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:16:29.398 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.398 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.398 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.398 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.398 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.398 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.398 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.398 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.398 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.398 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.398 "name": "Existed_Raid", 00:16:29.398 "uuid": "6279eb71-dff8-4bd5-b28d-28f0e51cff99", 00:16:29.398 "strip_size_kb": 64, 00:16:29.398 "state": "configuring", 00:16:29.398 "raid_level": "concat", 00:16:29.398 "superblock": true, 00:16:29.398 "num_base_bdevs": 4, 00:16:29.398 "num_base_bdevs_discovered": 2, 00:16:29.398 "num_base_bdevs_operational": 4, 00:16:29.398 "base_bdevs_list": [ 00:16:29.398 { 00:16:29.398 "name": "BaseBdev1", 00:16:29.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.398 "is_configured": false, 00:16:29.398 "data_offset": 0, 00:16:29.398 "data_size": 0 00:16:29.398 }, 00:16:29.398 { 00:16:29.398 "name": null, 00:16:29.398 "uuid": "082bc4e8-7547-4e5a-9304-dfec5c23cbfa", 00:16:29.398 "is_configured": false, 00:16:29.398 "data_offset": 0, 00:16:29.398 "data_size": 63488 
00:16:29.398 }, 00:16:29.398 { 00:16:29.398 "name": "BaseBdev3", 00:16:29.398 "uuid": "7109c1d7-d0c8-4146-9d18-bbabf7840694", 00:16:29.398 "is_configured": true, 00:16:29.398 "data_offset": 2048, 00:16:29.398 "data_size": 63488 00:16:29.398 }, 00:16:29.398 { 00:16:29.398 "name": "BaseBdev4", 00:16:29.398 "uuid": "d416acd8-9b35-4910-a676-53c9e3bc46aa", 00:16:29.398 "is_configured": true, 00:16:29.398 "data_offset": 2048, 00:16:29.398 "data_size": 63488 00:16:29.398 } 00:16:29.398 ] 00:16:29.398 }' 00:16:29.398 13:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.398 13:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.965 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.965 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:29.965 13:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.965 13:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.965 13:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.965 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:29.965 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:29.965 13:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.965 13:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.965 [2024-11-20 13:36:29.237504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:29.965 BaseBdev1 00:16:29.965 13:36:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.965 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:29.965 13:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:29.965 13:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:29.965 13:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:29.965 13:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:29.965 13:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:29.965 13:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:29.965 13:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.965 13:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.965 13:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.965 13:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:29.965 13:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.965 13:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.965 [ 00:16:29.965 { 00:16:29.965 "name": "BaseBdev1", 00:16:29.965 "aliases": [ 00:16:29.965 "08a6d61d-1e62-4cd5-9501-cbbd3cbc8e1a" 00:16:29.965 ], 00:16:29.965 "product_name": "Malloc disk", 00:16:29.965 "block_size": 512, 00:16:29.965 "num_blocks": 65536, 00:16:29.965 "uuid": "08a6d61d-1e62-4cd5-9501-cbbd3cbc8e1a", 00:16:29.965 "assigned_rate_limits": { 00:16:29.965 "rw_ios_per_sec": 0, 00:16:29.965 "rw_mbytes_per_sec": 0, 
00:16:29.965 "r_mbytes_per_sec": 0, 00:16:29.965 "w_mbytes_per_sec": 0 00:16:29.965 }, 00:16:29.965 "claimed": true, 00:16:29.965 "claim_type": "exclusive_write", 00:16:29.965 "zoned": false, 00:16:29.965 "supported_io_types": { 00:16:29.965 "read": true, 00:16:29.965 "write": true, 00:16:29.965 "unmap": true, 00:16:29.965 "flush": true, 00:16:29.965 "reset": true, 00:16:29.965 "nvme_admin": false, 00:16:29.965 "nvme_io": false, 00:16:29.965 "nvme_io_md": false, 00:16:29.965 "write_zeroes": true, 00:16:29.965 "zcopy": true, 00:16:29.965 "get_zone_info": false, 00:16:29.965 "zone_management": false, 00:16:29.965 "zone_append": false, 00:16:29.965 "compare": false, 00:16:29.965 "compare_and_write": false, 00:16:29.965 "abort": true, 00:16:29.965 "seek_hole": false, 00:16:29.965 "seek_data": false, 00:16:29.965 "copy": true, 00:16:29.965 "nvme_iov_md": false 00:16:29.965 }, 00:16:29.965 "memory_domains": [ 00:16:29.965 { 00:16:29.965 "dma_device_id": "system", 00:16:29.965 "dma_device_type": 1 00:16:29.965 }, 00:16:29.965 { 00:16:29.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.965 "dma_device_type": 2 00:16:29.965 } 00:16:29.965 ], 00:16:29.965 "driver_specific": {} 00:16:29.965 } 00:16:29.965 ] 00:16:29.965 13:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.965 13:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:29.965 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:29.965 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:29.965 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.965 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:29.965 13:36:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.965 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:29.965 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.965 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.965 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.965 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.965 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.965 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.965 13:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.965 13:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.965 13:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.965 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.965 "name": "Existed_Raid", 00:16:29.965 "uuid": "6279eb71-dff8-4bd5-b28d-28f0e51cff99", 00:16:29.965 "strip_size_kb": 64, 00:16:29.965 "state": "configuring", 00:16:29.965 "raid_level": "concat", 00:16:29.965 "superblock": true, 00:16:29.965 "num_base_bdevs": 4, 00:16:29.965 "num_base_bdevs_discovered": 3, 00:16:29.965 "num_base_bdevs_operational": 4, 00:16:29.965 "base_bdevs_list": [ 00:16:29.965 { 00:16:29.965 "name": "BaseBdev1", 00:16:29.965 "uuid": "08a6d61d-1e62-4cd5-9501-cbbd3cbc8e1a", 00:16:29.965 "is_configured": true, 00:16:29.965 "data_offset": 2048, 00:16:29.965 "data_size": 63488 00:16:29.965 }, 00:16:29.965 { 
00:16:29.965 "name": null, 00:16:29.965 "uuid": "082bc4e8-7547-4e5a-9304-dfec5c23cbfa", 00:16:29.965 "is_configured": false, 00:16:29.965 "data_offset": 0, 00:16:29.965 "data_size": 63488 00:16:29.965 }, 00:16:29.965 { 00:16:29.965 "name": "BaseBdev3", 00:16:29.965 "uuid": "7109c1d7-d0c8-4146-9d18-bbabf7840694", 00:16:29.965 "is_configured": true, 00:16:29.965 "data_offset": 2048, 00:16:29.965 "data_size": 63488 00:16:29.965 }, 00:16:29.965 { 00:16:29.965 "name": "BaseBdev4", 00:16:29.965 "uuid": "d416acd8-9b35-4910-a676-53c9e3bc46aa", 00:16:29.965 "is_configured": true, 00:16:29.965 "data_offset": 2048, 00:16:29.965 "data_size": 63488 00:16:29.965 } 00:16:29.965 ] 00:16:29.965 }' 00:16:29.965 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.965 13:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.307 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.307 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:30.307 13:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.307 13:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.307 13:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.307 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:30.307 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:30.307 13:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.307 13:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.307 [2024-11-20 13:36:29.745232] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:30.307 13:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.307 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:30.307 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.307 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.307 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:30.307 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.307 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:30.307 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.307 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.307 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.307 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.307 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.307 13:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.307 13:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.307 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.566 13:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.566 13:36:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.566 "name": "Existed_Raid", 00:16:30.566 "uuid": "6279eb71-dff8-4bd5-b28d-28f0e51cff99", 00:16:30.566 "strip_size_kb": 64, 00:16:30.566 "state": "configuring", 00:16:30.566 "raid_level": "concat", 00:16:30.566 "superblock": true, 00:16:30.566 "num_base_bdevs": 4, 00:16:30.566 "num_base_bdevs_discovered": 2, 00:16:30.566 "num_base_bdevs_operational": 4, 00:16:30.566 "base_bdevs_list": [ 00:16:30.566 { 00:16:30.566 "name": "BaseBdev1", 00:16:30.566 "uuid": "08a6d61d-1e62-4cd5-9501-cbbd3cbc8e1a", 00:16:30.566 "is_configured": true, 00:16:30.566 "data_offset": 2048, 00:16:30.566 "data_size": 63488 00:16:30.566 }, 00:16:30.566 { 00:16:30.566 "name": null, 00:16:30.566 "uuid": "082bc4e8-7547-4e5a-9304-dfec5c23cbfa", 00:16:30.566 "is_configured": false, 00:16:30.566 "data_offset": 0, 00:16:30.566 "data_size": 63488 00:16:30.566 }, 00:16:30.566 { 00:16:30.566 "name": null, 00:16:30.566 "uuid": "7109c1d7-d0c8-4146-9d18-bbabf7840694", 00:16:30.566 "is_configured": false, 00:16:30.566 "data_offset": 0, 00:16:30.566 "data_size": 63488 00:16:30.566 }, 00:16:30.566 { 00:16:30.566 "name": "BaseBdev4", 00:16:30.566 "uuid": "d416acd8-9b35-4910-a676-53c9e3bc46aa", 00:16:30.566 "is_configured": true, 00:16:30.566 "data_offset": 2048, 00:16:30.566 "data_size": 63488 00:16:30.566 } 00:16:30.566 ] 00:16:30.566 }' 00:16:30.566 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.566 13:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.824 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.824 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:30.824 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.824 
13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.824 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.824 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:30.824 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:30.824 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.824 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.824 [2024-11-20 13:36:30.224910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:30.824 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.824 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:30.824 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.824 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.824 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:30.824 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.824 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:30.824 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.824 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.824 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:30.824 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.824 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.824 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.824 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.824 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.824 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.824 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.824 "name": "Existed_Raid", 00:16:30.824 "uuid": "6279eb71-dff8-4bd5-b28d-28f0e51cff99", 00:16:30.824 "strip_size_kb": 64, 00:16:30.824 "state": "configuring", 00:16:30.824 "raid_level": "concat", 00:16:30.824 "superblock": true, 00:16:30.824 "num_base_bdevs": 4, 00:16:30.824 "num_base_bdevs_discovered": 3, 00:16:30.824 "num_base_bdevs_operational": 4, 00:16:30.824 "base_bdevs_list": [ 00:16:30.824 { 00:16:30.824 "name": "BaseBdev1", 00:16:30.824 "uuid": "08a6d61d-1e62-4cd5-9501-cbbd3cbc8e1a", 00:16:30.824 "is_configured": true, 00:16:30.824 "data_offset": 2048, 00:16:30.824 "data_size": 63488 00:16:30.824 }, 00:16:30.824 { 00:16:30.824 "name": null, 00:16:30.824 "uuid": "082bc4e8-7547-4e5a-9304-dfec5c23cbfa", 00:16:30.824 "is_configured": false, 00:16:30.824 "data_offset": 0, 00:16:30.824 "data_size": 63488 00:16:30.824 }, 00:16:30.824 { 00:16:30.824 "name": "BaseBdev3", 00:16:30.824 "uuid": "7109c1d7-d0c8-4146-9d18-bbabf7840694", 00:16:30.824 "is_configured": true, 00:16:30.824 "data_offset": 2048, 00:16:30.824 "data_size": 63488 00:16:30.824 }, 00:16:30.824 { 00:16:30.824 "name": "BaseBdev4", 00:16:30.824 "uuid": 
"d416acd8-9b35-4910-a676-53c9e3bc46aa", 00:16:30.824 "is_configured": true, 00:16:30.824 "data_offset": 2048, 00:16:30.824 "data_size": 63488 00:16:30.824 } 00:16:30.824 ] 00:16:30.824 }' 00:16:30.824 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.824 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.392 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.392 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.392 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.392 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:31.392 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.392 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:31.393 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:31.393 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.393 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.393 [2024-11-20 13:36:30.728261] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:31.393 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.393 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:31.393 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.393 13:36:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:31.393 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:31.393 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.393 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:31.393 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.393 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.393 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.393 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.393 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.393 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.393 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.393 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.393 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.393 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.393 "name": "Existed_Raid", 00:16:31.393 "uuid": "6279eb71-dff8-4bd5-b28d-28f0e51cff99", 00:16:31.393 "strip_size_kb": 64, 00:16:31.393 "state": "configuring", 00:16:31.393 "raid_level": "concat", 00:16:31.393 "superblock": true, 00:16:31.393 "num_base_bdevs": 4, 00:16:31.393 "num_base_bdevs_discovered": 2, 00:16:31.393 "num_base_bdevs_operational": 4, 00:16:31.393 "base_bdevs_list": [ 00:16:31.393 { 00:16:31.393 "name": null, 00:16:31.393 
"uuid": "08a6d61d-1e62-4cd5-9501-cbbd3cbc8e1a", 00:16:31.393 "is_configured": false, 00:16:31.393 "data_offset": 0, 00:16:31.393 "data_size": 63488 00:16:31.393 }, 00:16:31.393 { 00:16:31.393 "name": null, 00:16:31.393 "uuid": "082bc4e8-7547-4e5a-9304-dfec5c23cbfa", 00:16:31.393 "is_configured": false, 00:16:31.393 "data_offset": 0, 00:16:31.393 "data_size": 63488 00:16:31.393 }, 00:16:31.393 { 00:16:31.393 "name": "BaseBdev3", 00:16:31.393 "uuid": "7109c1d7-d0c8-4146-9d18-bbabf7840694", 00:16:31.393 "is_configured": true, 00:16:31.393 "data_offset": 2048, 00:16:31.393 "data_size": 63488 00:16:31.393 }, 00:16:31.393 { 00:16:31.393 "name": "BaseBdev4", 00:16:31.393 "uuid": "d416acd8-9b35-4910-a676-53c9e3bc46aa", 00:16:31.393 "is_configured": true, 00:16:31.393 "data_offset": 2048, 00:16:31.393 "data_size": 63488 00:16:31.393 } 00:16:31.393 ] 00:16:31.393 }' 00:16:31.393 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.393 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.961 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.961 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:31.961 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.961 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.961 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.961 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:31.961 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:31.961 13:36:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.961 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.961 [2024-11-20 13:36:31.289259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:31.961 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.961 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:31.961 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.961 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:31.961 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:31.961 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.961 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:31.961 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.961 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.961 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.961 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.961 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.961 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.961 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.961 13:36:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.961 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.961 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.961 "name": "Existed_Raid", 00:16:31.961 "uuid": "6279eb71-dff8-4bd5-b28d-28f0e51cff99", 00:16:31.961 "strip_size_kb": 64, 00:16:31.961 "state": "configuring", 00:16:31.961 "raid_level": "concat", 00:16:31.961 "superblock": true, 00:16:31.961 "num_base_bdevs": 4, 00:16:31.961 "num_base_bdevs_discovered": 3, 00:16:31.961 "num_base_bdevs_operational": 4, 00:16:31.961 "base_bdevs_list": [ 00:16:31.961 { 00:16:31.961 "name": null, 00:16:31.961 "uuid": "08a6d61d-1e62-4cd5-9501-cbbd3cbc8e1a", 00:16:31.961 "is_configured": false, 00:16:31.961 "data_offset": 0, 00:16:31.961 "data_size": 63488 00:16:31.961 }, 00:16:31.961 { 00:16:31.961 "name": "BaseBdev2", 00:16:31.961 "uuid": "082bc4e8-7547-4e5a-9304-dfec5c23cbfa", 00:16:31.961 "is_configured": true, 00:16:31.961 "data_offset": 2048, 00:16:31.961 "data_size": 63488 00:16:31.961 }, 00:16:31.961 { 00:16:31.961 "name": "BaseBdev3", 00:16:31.961 "uuid": "7109c1d7-d0c8-4146-9d18-bbabf7840694", 00:16:31.961 "is_configured": true, 00:16:31.961 "data_offset": 2048, 00:16:31.961 "data_size": 63488 00:16:31.961 }, 00:16:31.961 { 00:16:31.961 "name": "BaseBdev4", 00:16:31.961 "uuid": "d416acd8-9b35-4910-a676-53c9e3bc46aa", 00:16:31.961 "is_configured": true, 00:16:31.961 "data_offset": 2048, 00:16:31.961 "data_size": 63488 00:16:31.961 } 00:16:31.961 ] 00:16:31.961 }' 00:16:31.961 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.961 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.220 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.220 13:36:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.220 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.220 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:32.220 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.480 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:32.480 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:32.480 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.480 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.480 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.480 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.480 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 08a6d61d-1e62-4cd5-9501-cbbd3cbc8e1a 00:16:32.480 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.480 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.480 [2024-11-20 13:36:31.783300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:32.480 [2024-11-20 13:36:31.783564] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:32.480 [2024-11-20 13:36:31.783580] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:32.480 [2024-11-20 13:36:31.783843] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:16:32.480 [2024-11-20 13:36:31.783970] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:32.480 [2024-11-20 13:36:31.783983] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:32.480 NewBaseBdev 00:16:32.480 [2024-11-20 13:36:31.784121] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:32.480 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.480 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:32.480 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:32.480 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:32.480 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:32.480 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:32.480 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:32.480 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:32.480 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.480 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.480 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.480 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:32.480 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.480 13:36:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.480 [ 00:16:32.480 { 00:16:32.480 "name": "NewBaseBdev", 00:16:32.480 "aliases": [ 00:16:32.480 "08a6d61d-1e62-4cd5-9501-cbbd3cbc8e1a" 00:16:32.480 ], 00:16:32.480 "product_name": "Malloc disk", 00:16:32.480 "block_size": 512, 00:16:32.480 "num_blocks": 65536, 00:16:32.480 "uuid": "08a6d61d-1e62-4cd5-9501-cbbd3cbc8e1a", 00:16:32.480 "assigned_rate_limits": { 00:16:32.480 "rw_ios_per_sec": 0, 00:16:32.480 "rw_mbytes_per_sec": 0, 00:16:32.480 "r_mbytes_per_sec": 0, 00:16:32.480 "w_mbytes_per_sec": 0 00:16:32.480 }, 00:16:32.480 "claimed": true, 00:16:32.480 "claim_type": "exclusive_write", 00:16:32.480 "zoned": false, 00:16:32.480 "supported_io_types": { 00:16:32.480 "read": true, 00:16:32.480 "write": true, 00:16:32.480 "unmap": true, 00:16:32.480 "flush": true, 00:16:32.480 "reset": true, 00:16:32.480 "nvme_admin": false, 00:16:32.480 "nvme_io": false, 00:16:32.480 "nvme_io_md": false, 00:16:32.480 "write_zeroes": true, 00:16:32.480 "zcopy": true, 00:16:32.481 "get_zone_info": false, 00:16:32.481 "zone_management": false, 00:16:32.481 "zone_append": false, 00:16:32.481 "compare": false, 00:16:32.481 "compare_and_write": false, 00:16:32.481 "abort": true, 00:16:32.481 "seek_hole": false, 00:16:32.481 "seek_data": false, 00:16:32.481 "copy": true, 00:16:32.481 "nvme_iov_md": false 00:16:32.481 }, 00:16:32.481 "memory_domains": [ 00:16:32.481 { 00:16:32.481 "dma_device_id": "system", 00:16:32.481 "dma_device_type": 1 00:16:32.481 }, 00:16:32.481 { 00:16:32.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.481 "dma_device_type": 2 00:16:32.481 } 00:16:32.481 ], 00:16:32.481 "driver_specific": {} 00:16:32.481 } 00:16:32.481 ] 00:16:32.481 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.481 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:32.481 13:36:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:16:32.481 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:32.481 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.481 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:32.481 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.481 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:32.481 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.481 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.481 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.481 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.481 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.481 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.481 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.481 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.481 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.481 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.481 "name": "Existed_Raid", 00:16:32.481 "uuid": "6279eb71-dff8-4bd5-b28d-28f0e51cff99", 00:16:32.481 "strip_size_kb": 64, 00:16:32.481 
"state": "online", 00:16:32.481 "raid_level": "concat", 00:16:32.481 "superblock": true, 00:16:32.481 "num_base_bdevs": 4, 00:16:32.481 "num_base_bdevs_discovered": 4, 00:16:32.481 "num_base_bdevs_operational": 4, 00:16:32.481 "base_bdevs_list": [ 00:16:32.481 { 00:16:32.481 "name": "NewBaseBdev", 00:16:32.481 "uuid": "08a6d61d-1e62-4cd5-9501-cbbd3cbc8e1a", 00:16:32.481 "is_configured": true, 00:16:32.481 "data_offset": 2048, 00:16:32.481 "data_size": 63488 00:16:32.481 }, 00:16:32.481 { 00:16:32.481 "name": "BaseBdev2", 00:16:32.481 "uuid": "082bc4e8-7547-4e5a-9304-dfec5c23cbfa", 00:16:32.481 "is_configured": true, 00:16:32.481 "data_offset": 2048, 00:16:32.481 "data_size": 63488 00:16:32.481 }, 00:16:32.481 { 00:16:32.481 "name": "BaseBdev3", 00:16:32.481 "uuid": "7109c1d7-d0c8-4146-9d18-bbabf7840694", 00:16:32.481 "is_configured": true, 00:16:32.481 "data_offset": 2048, 00:16:32.481 "data_size": 63488 00:16:32.481 }, 00:16:32.481 { 00:16:32.481 "name": "BaseBdev4", 00:16:32.481 "uuid": "d416acd8-9b35-4910-a676-53c9e3bc46aa", 00:16:32.481 "is_configured": true, 00:16:32.481 "data_offset": 2048, 00:16:32.481 "data_size": 63488 00:16:32.481 } 00:16:32.481 ] 00:16:32.481 }' 00:16:32.481 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.481 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.050 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:33.050 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:33.050 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:33.050 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:33.050 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:33.050 
13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:33.050 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:33.050 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:33.050 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.050 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.050 [2024-11-20 13:36:32.239113] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:33.050 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.050 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:33.050 "name": "Existed_Raid", 00:16:33.050 "aliases": [ 00:16:33.050 "6279eb71-dff8-4bd5-b28d-28f0e51cff99" 00:16:33.050 ], 00:16:33.050 "product_name": "Raid Volume", 00:16:33.050 "block_size": 512, 00:16:33.050 "num_blocks": 253952, 00:16:33.050 "uuid": "6279eb71-dff8-4bd5-b28d-28f0e51cff99", 00:16:33.050 "assigned_rate_limits": { 00:16:33.050 "rw_ios_per_sec": 0, 00:16:33.050 "rw_mbytes_per_sec": 0, 00:16:33.050 "r_mbytes_per_sec": 0, 00:16:33.050 "w_mbytes_per_sec": 0 00:16:33.050 }, 00:16:33.050 "claimed": false, 00:16:33.050 "zoned": false, 00:16:33.050 "supported_io_types": { 00:16:33.050 "read": true, 00:16:33.050 "write": true, 00:16:33.050 "unmap": true, 00:16:33.050 "flush": true, 00:16:33.050 "reset": true, 00:16:33.050 "nvme_admin": false, 00:16:33.050 "nvme_io": false, 00:16:33.050 "nvme_io_md": false, 00:16:33.050 "write_zeroes": true, 00:16:33.050 "zcopy": false, 00:16:33.050 "get_zone_info": false, 00:16:33.050 "zone_management": false, 00:16:33.050 "zone_append": false, 00:16:33.050 "compare": false, 00:16:33.050 "compare_and_write": false, 00:16:33.050 "abort": 
false, 00:16:33.050 "seek_hole": false, 00:16:33.050 "seek_data": false, 00:16:33.050 "copy": false, 00:16:33.050 "nvme_iov_md": false 00:16:33.050 }, 00:16:33.050 "memory_domains": [ 00:16:33.050 { 00:16:33.050 "dma_device_id": "system", 00:16:33.050 "dma_device_type": 1 00:16:33.050 }, 00:16:33.050 { 00:16:33.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.050 "dma_device_type": 2 00:16:33.050 }, 00:16:33.050 { 00:16:33.050 "dma_device_id": "system", 00:16:33.050 "dma_device_type": 1 00:16:33.050 }, 00:16:33.050 { 00:16:33.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.050 "dma_device_type": 2 00:16:33.050 }, 00:16:33.050 { 00:16:33.050 "dma_device_id": "system", 00:16:33.050 "dma_device_type": 1 00:16:33.050 }, 00:16:33.050 { 00:16:33.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.050 "dma_device_type": 2 00:16:33.050 }, 00:16:33.050 { 00:16:33.050 "dma_device_id": "system", 00:16:33.050 "dma_device_type": 1 00:16:33.050 }, 00:16:33.050 { 00:16:33.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.050 "dma_device_type": 2 00:16:33.050 } 00:16:33.050 ], 00:16:33.050 "driver_specific": { 00:16:33.050 "raid": { 00:16:33.050 "uuid": "6279eb71-dff8-4bd5-b28d-28f0e51cff99", 00:16:33.050 "strip_size_kb": 64, 00:16:33.050 "state": "online", 00:16:33.050 "raid_level": "concat", 00:16:33.050 "superblock": true, 00:16:33.050 "num_base_bdevs": 4, 00:16:33.050 "num_base_bdevs_discovered": 4, 00:16:33.050 "num_base_bdevs_operational": 4, 00:16:33.050 "base_bdevs_list": [ 00:16:33.050 { 00:16:33.050 "name": "NewBaseBdev", 00:16:33.050 "uuid": "08a6d61d-1e62-4cd5-9501-cbbd3cbc8e1a", 00:16:33.050 "is_configured": true, 00:16:33.050 "data_offset": 2048, 00:16:33.050 "data_size": 63488 00:16:33.050 }, 00:16:33.050 { 00:16:33.050 "name": "BaseBdev2", 00:16:33.050 "uuid": "082bc4e8-7547-4e5a-9304-dfec5c23cbfa", 00:16:33.050 "is_configured": true, 00:16:33.050 "data_offset": 2048, 00:16:33.050 "data_size": 63488 00:16:33.050 }, 00:16:33.050 { 00:16:33.050 
"name": "BaseBdev3", 00:16:33.050 "uuid": "7109c1d7-d0c8-4146-9d18-bbabf7840694", 00:16:33.050 "is_configured": true, 00:16:33.050 "data_offset": 2048, 00:16:33.050 "data_size": 63488 00:16:33.050 }, 00:16:33.050 { 00:16:33.050 "name": "BaseBdev4", 00:16:33.050 "uuid": "d416acd8-9b35-4910-a676-53c9e3bc46aa", 00:16:33.050 "is_configured": true, 00:16:33.050 "data_offset": 2048, 00:16:33.050 "data_size": 63488 00:16:33.050 } 00:16:33.050 ] 00:16:33.050 } 00:16:33.050 } 00:16:33.050 }' 00:16:33.050 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:33.050 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:33.050 BaseBdev2 00:16:33.050 BaseBdev3 00:16:33.050 BaseBdev4' 00:16:33.050 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.050 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:33.050 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:33.050 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:33.050 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.050 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.050 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.050 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.050 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:33.050 13:36:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:33.050 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:33.050 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.050 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:33.050 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.051 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.051 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.051 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:33.051 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:33.051 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:33.051 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:33.051 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.051 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.051 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.051 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.051 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:33.051 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:16:33.051 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:33.051 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.051 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:33.051 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.051 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.051 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.051 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:33.051 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:33.051 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:33.051 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.051 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.051 [2024-11-20 13:36:32.522410] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:33.051 [2024-11-20 13:36:32.522443] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:33.051 [2024-11-20 13:36:32.522513] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:33.051 [2024-11-20 13:36:32.522581] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:33.051 [2024-11-20 13:36:32.522593] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:16:33.051 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.051 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71707 00:16:33.051 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 71707 ']' 00:16:33.051 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 71707 00:16:33.051 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:33.051 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:33.310 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71707 00:16:33.310 killing process with pid 71707 00:16:33.310 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:33.310 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:33.310 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71707' 00:16:33.310 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 71707 00:16:33.310 [2024-11-20 13:36:32.571825] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:33.310 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 71707 00:16:33.568 [2024-11-20 13:36:32.975563] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:34.958 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:34.958 00:16:34.958 real 0m11.252s 00:16:34.958 user 0m17.814s 00:16:34.958 sys 0m2.237s 00:16:34.958 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:34.958 
************************************ 00:16:34.958 END TEST raid_state_function_test_sb 00:16:34.958 ************************************ 00:16:34.958 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.958 13:36:34 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:16:34.958 13:36:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:34.958 13:36:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:34.958 13:36:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:34.958 ************************************ 00:16:34.958 START TEST raid_superblock_test 00:16:34.958 ************************************ 00:16:34.958 13:36:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:16:34.958 13:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:16:34.958 13:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:34.958 13:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:34.958 13:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:34.958 13:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:34.958 13:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:34.958 13:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:34.958 13:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:34.958 13:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:34.958 13:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:34.959 13:36:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:34.959 13:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:34.959 13:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:34.959 13:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:16:34.959 13:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:34.959 13:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:34.959 13:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72374 00:16:34.959 13:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:34.959 13:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72374 00:16:34.959 13:36:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72374 ']' 00:16:34.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.959 13:36:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.959 13:36:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:34.959 13:36:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.959 13:36:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:34.959 13:36:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.959 [2024-11-20 13:36:34.281632] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:16:34.959 [2024-11-20 13:36:34.281762] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72374 ] 00:16:35.218 [2024-11-20 13:36:34.451514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.218 [2024-11-20 13:36:34.564779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.478 [2024-11-20 13:36:34.767726] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:35.478 [2024-11-20 13:36:34.767763] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:35.738 13:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:35.738 13:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:16:35.738 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:35.738 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:35.738 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:35.738 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:35.738 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:35.738 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:35.738 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:35.738 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:35.738 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:16:35.738 
13:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.738 13:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.738 malloc1 00:16:35.738 13:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.738 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:35.738 13:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.738 13:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.738 [2024-11-20 13:36:35.167948] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:35.738 [2024-11-20 13:36:35.168152] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.738 [2024-11-20 13:36:35.168214] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:35.738 [2024-11-20 13:36:35.168303] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.738 [2024-11-20 13:36:35.170693] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.738 [2024-11-20 13:36:35.170840] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:35.738 pt1 00:16:35.738 13:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.738 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:35.738 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:35.738 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:35.738 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:35.738 13:36:35 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:35.738 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:35.738 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:35.738 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:35.738 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:35.738 13:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.738 13:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.738 malloc2 00:16:35.738 13:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.738 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:35.738 13:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.738 13:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.738 [2024-11-20 13:36:35.220285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:35.738 [2024-11-20 13:36:35.220454] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.738 [2024-11-20 13:36:35.220517] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:35.738 [2024-11-20 13:36:35.220585] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.998 [2024-11-20 13:36:35.222965] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.998 [2024-11-20 13:36:35.223116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:35.998 
pt2 00:16:35.998 13:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.998 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:35.998 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:35.998 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:35.998 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:35.998 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:35.998 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:35.998 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:35.998 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:35.998 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:35.998 13:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.998 13:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.998 malloc3 00:16:35.998 13:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.998 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:35.998 13:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.998 13:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.998 [2024-11-20 13:36:35.294921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:35.998 [2024-11-20 13:36:35.295095] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.998 [2024-11-20 13:36:35.295156] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:35.998 [2024-11-20 13:36:35.295231] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.998 [2024-11-20 13:36:35.297614] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.998 [2024-11-20 13:36:35.297650] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:35.998 pt3 00:16:35.998 13:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.998 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:35.998 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:35.998 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:16:35.998 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:16:35.998 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:35.998 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:35.998 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:35.998 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:35.998 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:16:35.998 13:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.998 13:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.998 malloc4 00:16:35.998 13:36:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.998 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:35.998 13:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.998 13:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.998 [2024-11-20 13:36:35.353755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:35.998 [2024-11-20 13:36:35.353817] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.998 [2024-11-20 13:36:35.353841] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:35.998 [2024-11-20 13:36:35.353853] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.998 [2024-11-20 13:36:35.356181] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.998 [2024-11-20 13:36:35.356217] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:35.998 pt4 00:16:35.998 13:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.998 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:35.998 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:35.998 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:16:35.998 13:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.998 13:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.998 [2024-11-20 13:36:35.365777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:35.998 [2024-11-20 
13:36:35.367905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:35.998 [2024-11-20 13:36:35.367991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:35.998 [2024-11-20 13:36:35.368035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:35.998 [2024-11-20 13:36:35.368231] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:35.999 [2024-11-20 13:36:35.368244] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:35.999 [2024-11-20 13:36:35.368504] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:35.999 [2024-11-20 13:36:35.368654] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:35.999 [2024-11-20 13:36:35.368668] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:35.999 [2024-11-20 13:36:35.368815] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.999 13:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.999 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:16:35.999 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.999 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.999 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:35.999 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:35.999 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:35.999 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:16:35.999 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.999 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.999 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.999 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.999 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.999 13:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.999 13:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.999 13:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.999 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.999 "name": "raid_bdev1", 00:16:35.999 "uuid": "5b8a4697-3db1-4381-b507-b1b87fee0e54", 00:16:35.999 "strip_size_kb": 64, 00:16:35.999 "state": "online", 00:16:35.999 "raid_level": "concat", 00:16:35.999 "superblock": true, 00:16:35.999 "num_base_bdevs": 4, 00:16:35.999 "num_base_bdevs_discovered": 4, 00:16:35.999 "num_base_bdevs_operational": 4, 00:16:35.999 "base_bdevs_list": [ 00:16:35.999 { 00:16:35.999 "name": "pt1", 00:16:35.999 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:35.999 "is_configured": true, 00:16:35.999 "data_offset": 2048, 00:16:35.999 "data_size": 63488 00:16:35.999 }, 00:16:35.999 { 00:16:35.999 "name": "pt2", 00:16:35.999 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:35.999 "is_configured": true, 00:16:35.999 "data_offset": 2048, 00:16:35.999 "data_size": 63488 00:16:35.999 }, 00:16:35.999 { 00:16:35.999 "name": "pt3", 00:16:35.999 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:35.999 "is_configured": true, 00:16:35.999 "data_offset": 2048, 00:16:35.999 
"data_size": 63488 00:16:35.999 }, 00:16:35.999 { 00:16:35.999 "name": "pt4", 00:16:35.999 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:35.999 "is_configured": true, 00:16:35.999 "data_offset": 2048, 00:16:35.999 "data_size": 63488 00:16:35.999 } 00:16:35.999 ] 00:16:35.999 }' 00:16:35.999 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.999 13:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.567 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:36.567 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:36.567 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:36.567 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:36.567 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:36.567 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:36.567 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:36.567 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:36.567 13:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.567 13:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.567 [2024-11-20 13:36:35.821467] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:36.567 13:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.567 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:36.567 "name": "raid_bdev1", 00:16:36.567 "aliases": [ 00:16:36.567 "5b8a4697-3db1-4381-b507-b1b87fee0e54" 
00:16:36.567 ], 00:16:36.567 "product_name": "Raid Volume", 00:16:36.567 "block_size": 512, 00:16:36.567 "num_blocks": 253952, 00:16:36.567 "uuid": "5b8a4697-3db1-4381-b507-b1b87fee0e54", 00:16:36.567 "assigned_rate_limits": { 00:16:36.567 "rw_ios_per_sec": 0, 00:16:36.567 "rw_mbytes_per_sec": 0, 00:16:36.567 "r_mbytes_per_sec": 0, 00:16:36.567 "w_mbytes_per_sec": 0 00:16:36.567 }, 00:16:36.567 "claimed": false, 00:16:36.567 "zoned": false, 00:16:36.567 "supported_io_types": { 00:16:36.567 "read": true, 00:16:36.567 "write": true, 00:16:36.567 "unmap": true, 00:16:36.567 "flush": true, 00:16:36.567 "reset": true, 00:16:36.567 "nvme_admin": false, 00:16:36.567 "nvme_io": false, 00:16:36.567 "nvme_io_md": false, 00:16:36.567 "write_zeroes": true, 00:16:36.567 "zcopy": false, 00:16:36.567 "get_zone_info": false, 00:16:36.568 "zone_management": false, 00:16:36.568 "zone_append": false, 00:16:36.568 "compare": false, 00:16:36.568 "compare_and_write": false, 00:16:36.568 "abort": false, 00:16:36.568 "seek_hole": false, 00:16:36.568 "seek_data": false, 00:16:36.568 "copy": false, 00:16:36.568 "nvme_iov_md": false 00:16:36.568 }, 00:16:36.568 "memory_domains": [ 00:16:36.568 { 00:16:36.568 "dma_device_id": "system", 00:16:36.568 "dma_device_type": 1 00:16:36.568 }, 00:16:36.568 { 00:16:36.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.568 "dma_device_type": 2 00:16:36.568 }, 00:16:36.568 { 00:16:36.568 "dma_device_id": "system", 00:16:36.568 "dma_device_type": 1 00:16:36.568 }, 00:16:36.568 { 00:16:36.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.568 "dma_device_type": 2 00:16:36.568 }, 00:16:36.568 { 00:16:36.568 "dma_device_id": "system", 00:16:36.568 "dma_device_type": 1 00:16:36.568 }, 00:16:36.568 { 00:16:36.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.568 "dma_device_type": 2 00:16:36.568 }, 00:16:36.568 { 00:16:36.568 "dma_device_id": "system", 00:16:36.568 "dma_device_type": 1 00:16:36.568 }, 00:16:36.568 { 00:16:36.568 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:36.568 "dma_device_type": 2 00:16:36.568 } 00:16:36.568 ], 00:16:36.568 "driver_specific": { 00:16:36.568 "raid": { 00:16:36.568 "uuid": "5b8a4697-3db1-4381-b507-b1b87fee0e54", 00:16:36.568 "strip_size_kb": 64, 00:16:36.568 "state": "online", 00:16:36.568 "raid_level": "concat", 00:16:36.568 "superblock": true, 00:16:36.568 "num_base_bdevs": 4, 00:16:36.568 "num_base_bdevs_discovered": 4, 00:16:36.568 "num_base_bdevs_operational": 4, 00:16:36.568 "base_bdevs_list": [ 00:16:36.568 { 00:16:36.568 "name": "pt1", 00:16:36.568 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:36.568 "is_configured": true, 00:16:36.568 "data_offset": 2048, 00:16:36.568 "data_size": 63488 00:16:36.568 }, 00:16:36.568 { 00:16:36.568 "name": "pt2", 00:16:36.568 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:36.568 "is_configured": true, 00:16:36.568 "data_offset": 2048, 00:16:36.568 "data_size": 63488 00:16:36.568 }, 00:16:36.568 { 00:16:36.568 "name": "pt3", 00:16:36.568 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:36.568 "is_configured": true, 00:16:36.568 "data_offset": 2048, 00:16:36.568 "data_size": 63488 00:16:36.568 }, 00:16:36.568 { 00:16:36.568 "name": "pt4", 00:16:36.568 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:36.568 "is_configured": true, 00:16:36.568 "data_offset": 2048, 00:16:36.568 "data_size": 63488 00:16:36.568 } 00:16:36.568 ] 00:16:36.568 } 00:16:36.568 } 00:16:36.568 }' 00:16:36.568 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:36.568 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:36.568 pt2 00:16:36.568 pt3 00:16:36.568 pt4' 00:16:36.568 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:36.568 13:36:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:36.568 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:36.568 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:36.568 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:36.568 13:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.568 13:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.568 13:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.568 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:36.568 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:36.568 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:36.568 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:36.568 13:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.568 13:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.568 13:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:36.568 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.568 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:36.568 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:36.568 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:36.568 13:36:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:36.568 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.568 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.568 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:36.828 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.828 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:36.828 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:36.828 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:36.828 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:36.828 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:36.828 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.828 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.828 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.828 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:36.828 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:36.828 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:36.828 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:36.828 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:36.828 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.828 [2024-11-20 13:36:36.148946] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:36.828 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.828 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5b8a4697-3db1-4381-b507-b1b87fee0e54 00:16:36.828 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5b8a4697-3db1-4381-b507-b1b87fee0e54 ']' 00:16:36.829 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:36.829 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.829 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.829 [2024-11-20 13:36:36.192590] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:36.829 [2024-11-20 13:36:36.192617] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:36.829 [2024-11-20 13:36:36.192696] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:36.829 [2024-11-20 13:36:36.192764] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:36.829 [2024-11-20 13:36:36.192781] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:36.829 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.829 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:36.829 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.829 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:36.829 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.829 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.829 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:36.829 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:36.829 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:36.829 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:36.829 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.829 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.829 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.829 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:36.829 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:36.829 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.829 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.829 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.829 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:36.829 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:36.829 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.829 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.829 13:36:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.829 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:36.829 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:16:36.829 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.829 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.829 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.829 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:36.829 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:36.829 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.829 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.089 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.089 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:37.089 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:37.089 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:16:37.089 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:37.089 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:37.089 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:37.089 13:36:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:37.089 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:37.089 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:37.089 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.089 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.089 [2024-11-20 13:36:36.348405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:37.089 [2024-11-20 13:36:36.350498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:37.089 [2024-11-20 13:36:36.350547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:37.089 [2024-11-20 13:36:36.350581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:37.089 [2024-11-20 13:36:36.350632] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:37.089 [2024-11-20 13:36:36.350688] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:37.089 [2024-11-20 13:36:36.350711] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:37.089 [2024-11-20 13:36:36.350733] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:37.089 [2024-11-20 13:36:36.350749] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:37.089 [2024-11-20 13:36:36.350762] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring
00:16:37.089 request:
00:16:37.089 {
00:16:37.089 "name": "raid_bdev1",
00:16:37.089 "raid_level": "concat",
00:16:37.089 "base_bdevs": [
00:16:37.089 "malloc1",
00:16:37.089 "malloc2",
00:16:37.089 "malloc3",
00:16:37.089 "malloc4"
00:16:37.089 ],
00:16:37.089 "strip_size_kb": 64,
00:16:37.089 "superblock": false,
00:16:37.089 "method": "bdev_raid_create",
00:16:37.089 "req_id": 1
00:16:37.089 }
00:16:37.089 Got JSON-RPC error response
00:16:37.089 response:
00:16:37.089 {
00:16:37.089 "code": -17,
00:16:37.089 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:16:37.089 }
00:16:37.089 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:16:37.089 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:16:37.089 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:37.089 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:37.089 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:37.089 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:37.089 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:16:37.089 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:37.089 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:37.089 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:37.089 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:16:37.089 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:16:37.089 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1
-u 00000000-0000-0000-0000-000000000001 00:16:37.089 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.089 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.089 [2024-11-20 13:36:36.416263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:37.089 [2024-11-20 13:36:36.416324] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.089 [2024-11-20 13:36:36.416346] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:37.089 [2024-11-20 13:36:36.416360] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.090 [2024-11-20 13:36:36.418910] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.090 [2024-11-20 13:36:36.418958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:37.090 [2024-11-20 13:36:36.419046] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:37.090 [2024-11-20 13:36:36.419132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:37.090 pt1 00:16:37.090 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.090 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:16:37.090 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:37.090 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:37.090 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:37.090 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.090 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:16:37.090 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.090 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.090 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.090 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.090 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.090 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.090 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.090 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.090 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.090 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.090 "name": "raid_bdev1", 00:16:37.090 "uuid": "5b8a4697-3db1-4381-b507-b1b87fee0e54", 00:16:37.090 "strip_size_kb": 64, 00:16:37.090 "state": "configuring", 00:16:37.090 "raid_level": "concat", 00:16:37.090 "superblock": true, 00:16:37.090 "num_base_bdevs": 4, 00:16:37.090 "num_base_bdevs_discovered": 1, 00:16:37.090 "num_base_bdevs_operational": 4, 00:16:37.090 "base_bdevs_list": [ 00:16:37.090 { 00:16:37.090 "name": "pt1", 00:16:37.090 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:37.090 "is_configured": true, 00:16:37.090 "data_offset": 2048, 00:16:37.090 "data_size": 63488 00:16:37.090 }, 00:16:37.090 { 00:16:37.090 "name": null, 00:16:37.090 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:37.090 "is_configured": false, 00:16:37.090 "data_offset": 2048, 00:16:37.090 "data_size": 63488 00:16:37.090 }, 00:16:37.090 { 00:16:37.090 "name": null, 00:16:37.090 
"uuid": "00000000-0000-0000-0000-000000000003", 00:16:37.090 "is_configured": false, 00:16:37.090 "data_offset": 2048, 00:16:37.090 "data_size": 63488 00:16:37.090 }, 00:16:37.090 { 00:16:37.090 "name": null, 00:16:37.090 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:37.090 "is_configured": false, 00:16:37.090 "data_offset": 2048, 00:16:37.090 "data_size": 63488 00:16:37.090 } 00:16:37.090 ] 00:16:37.090 }' 00:16:37.090 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.090 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.659 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:37.659 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:37.659 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.659 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.659 [2024-11-20 13:36:36.851707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:37.659 [2024-11-20 13:36:36.851783] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.659 [2024-11-20 13:36:36.851805] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:37.659 [2024-11-20 13:36:36.851819] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.659 [2024-11-20 13:36:36.852277] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.659 [2024-11-20 13:36:36.852310] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:37.659 [2024-11-20 13:36:36.852392] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:37.659 [2024-11-20 13:36:36.852418] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:37.659 pt2 00:16:37.659 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.659 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:37.659 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.659 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.659 [2024-11-20 13:36:36.863689] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:37.659 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.660 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:16:37.660 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:37.660 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:37.660 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:37.660 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.660 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:37.660 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.660 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.660 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.660 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.660 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.660 13:36:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.660 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.660 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.660 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.660 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.660 "name": "raid_bdev1", 00:16:37.660 "uuid": "5b8a4697-3db1-4381-b507-b1b87fee0e54", 00:16:37.660 "strip_size_kb": 64, 00:16:37.660 "state": "configuring", 00:16:37.660 "raid_level": "concat", 00:16:37.660 "superblock": true, 00:16:37.660 "num_base_bdevs": 4, 00:16:37.660 "num_base_bdevs_discovered": 1, 00:16:37.660 "num_base_bdevs_operational": 4, 00:16:37.660 "base_bdevs_list": [ 00:16:37.660 { 00:16:37.660 "name": "pt1", 00:16:37.660 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:37.660 "is_configured": true, 00:16:37.660 "data_offset": 2048, 00:16:37.660 "data_size": 63488 00:16:37.660 }, 00:16:37.660 { 00:16:37.660 "name": null, 00:16:37.660 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:37.660 "is_configured": false, 00:16:37.660 "data_offset": 0, 00:16:37.660 "data_size": 63488 00:16:37.660 }, 00:16:37.660 { 00:16:37.660 "name": null, 00:16:37.660 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:37.660 "is_configured": false, 00:16:37.660 "data_offset": 2048, 00:16:37.660 "data_size": 63488 00:16:37.660 }, 00:16:37.660 { 00:16:37.660 "name": null, 00:16:37.660 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:37.660 "is_configured": false, 00:16:37.660 "data_offset": 2048, 00:16:37.660 "data_size": 63488 00:16:37.660 } 00:16:37.660 ] 00:16:37.660 }' 00:16:37.660 13:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.660 13:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:16:37.919 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:37.919 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:37.919 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:37.919 13:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.919 13:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.919 [2024-11-20 13:36:37.251169] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:37.919 [2024-11-20 13:36:37.251236] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.919 [2024-11-20 13:36:37.251259] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:37.919 [2024-11-20 13:36:37.251271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.919 [2024-11-20 13:36:37.251713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.919 [2024-11-20 13:36:37.251743] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:37.919 [2024-11-20 13:36:37.251832] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:37.919 [2024-11-20 13:36:37.251853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:37.919 pt2 00:16:37.919 13:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.919 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:37.919 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:37.919 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:16:37.919 13:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.919 13:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.919 [2024-11-20 13:36:37.263128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:37.919 [2024-11-20 13:36:37.263177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.919 [2024-11-20 13:36:37.263197] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:37.919 [2024-11-20 13:36:37.263208] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.919 [2024-11-20 13:36:37.263574] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.919 [2024-11-20 13:36:37.263603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:37.919 [2024-11-20 13:36:37.263673] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:37.919 [2024-11-20 13:36:37.263699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:37.919 pt3 00:16:37.919 13:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.920 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:37.920 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:37.920 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:37.920 13:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.920 13:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.920 [2024-11-20 13:36:37.275081] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:16:37.920 [2024-11-20 13:36:37.275127] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.920 [2024-11-20 13:36:37.275146] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:37.920 [2024-11-20 13:36:37.275156] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.920 [2024-11-20 13:36:37.275520] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.920 [2024-11-20 13:36:37.275549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:37.920 [2024-11-20 13:36:37.275612] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:37.920 [2024-11-20 13:36:37.275634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:37.920 [2024-11-20 13:36:37.275763] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:37.920 [2024-11-20 13:36:37.275772] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:37.920 [2024-11-20 13:36:37.276016] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:37.920 [2024-11-20 13:36:37.276185] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:37.920 [2024-11-20 13:36:37.276200] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:37.920 [2024-11-20 13:36:37.276333] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:37.920 pt4 00:16:37.920 13:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.920 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:37.920 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:37.920 
13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:16:37.920 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:37.920 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:37.920 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:37.920 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.920 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:37.920 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.920 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.920 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.920 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.920 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.920 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.920 13:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.920 13:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.920 13:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.920 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.920 "name": "raid_bdev1", 00:16:37.920 "uuid": "5b8a4697-3db1-4381-b507-b1b87fee0e54", 00:16:37.920 "strip_size_kb": 64, 00:16:37.920 "state": "online", 00:16:37.920 "raid_level": "concat", 00:16:37.920 "superblock": true, 00:16:37.920 
"num_base_bdevs": 4, 00:16:37.920 "num_base_bdevs_discovered": 4, 00:16:37.920 "num_base_bdevs_operational": 4, 00:16:37.920 "base_bdevs_list": [ 00:16:37.920 { 00:16:37.920 "name": "pt1", 00:16:37.920 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:37.920 "is_configured": true, 00:16:37.920 "data_offset": 2048, 00:16:37.920 "data_size": 63488 00:16:37.920 }, 00:16:37.920 { 00:16:37.920 "name": "pt2", 00:16:37.920 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:37.920 "is_configured": true, 00:16:37.920 "data_offset": 2048, 00:16:37.920 "data_size": 63488 00:16:37.920 }, 00:16:37.920 { 00:16:37.920 "name": "pt3", 00:16:37.920 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:37.920 "is_configured": true, 00:16:37.920 "data_offset": 2048, 00:16:37.920 "data_size": 63488 00:16:37.920 }, 00:16:37.920 { 00:16:37.920 "name": "pt4", 00:16:37.920 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:37.920 "is_configured": true, 00:16:37.920 "data_offset": 2048, 00:16:37.920 "data_size": 63488 00:16:37.920 } 00:16:37.920 ] 00:16:37.920 }' 00:16:37.920 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.920 13:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.178 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:38.178 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:38.178 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:38.178 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:38.178 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:38.178 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:38.178 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
jq '.[]' 00:16:38.178 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:38.178 13:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.178 13:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.178 [2024-11-20 13:36:37.662886] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:38.436 13:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.436 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:38.436 "name": "raid_bdev1", 00:16:38.436 "aliases": [ 00:16:38.436 "5b8a4697-3db1-4381-b507-b1b87fee0e54" 00:16:38.436 ], 00:16:38.436 "product_name": "Raid Volume", 00:16:38.436 "block_size": 512, 00:16:38.436 "num_blocks": 253952, 00:16:38.436 "uuid": "5b8a4697-3db1-4381-b507-b1b87fee0e54", 00:16:38.436 "assigned_rate_limits": { 00:16:38.436 "rw_ios_per_sec": 0, 00:16:38.436 "rw_mbytes_per_sec": 0, 00:16:38.436 "r_mbytes_per_sec": 0, 00:16:38.436 "w_mbytes_per_sec": 0 00:16:38.436 }, 00:16:38.436 "claimed": false, 00:16:38.436 "zoned": false, 00:16:38.436 "supported_io_types": { 00:16:38.436 "read": true, 00:16:38.436 "write": true, 00:16:38.436 "unmap": true, 00:16:38.436 "flush": true, 00:16:38.436 "reset": true, 00:16:38.436 "nvme_admin": false, 00:16:38.436 "nvme_io": false, 00:16:38.436 "nvme_io_md": false, 00:16:38.436 "write_zeroes": true, 00:16:38.436 "zcopy": false, 00:16:38.436 "get_zone_info": false, 00:16:38.436 "zone_management": false, 00:16:38.436 "zone_append": false, 00:16:38.436 "compare": false, 00:16:38.436 "compare_and_write": false, 00:16:38.436 "abort": false, 00:16:38.436 "seek_hole": false, 00:16:38.436 "seek_data": false, 00:16:38.436 "copy": false, 00:16:38.436 "nvme_iov_md": false 00:16:38.436 }, 00:16:38.436 "memory_domains": [ 00:16:38.436 { 00:16:38.436 "dma_device_id": "system", 
00:16:38.436 "dma_device_type": 1 00:16:38.436 }, 00:16:38.436 { 00:16:38.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.436 "dma_device_type": 2 00:16:38.436 }, 00:16:38.436 { 00:16:38.436 "dma_device_id": "system", 00:16:38.436 "dma_device_type": 1 00:16:38.436 }, 00:16:38.436 { 00:16:38.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.436 "dma_device_type": 2 00:16:38.436 }, 00:16:38.436 { 00:16:38.436 "dma_device_id": "system", 00:16:38.436 "dma_device_type": 1 00:16:38.436 }, 00:16:38.436 { 00:16:38.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.436 "dma_device_type": 2 00:16:38.436 }, 00:16:38.436 { 00:16:38.436 "dma_device_id": "system", 00:16:38.436 "dma_device_type": 1 00:16:38.436 }, 00:16:38.436 { 00:16:38.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.436 "dma_device_type": 2 00:16:38.436 } 00:16:38.436 ], 00:16:38.436 "driver_specific": { 00:16:38.436 "raid": { 00:16:38.436 "uuid": "5b8a4697-3db1-4381-b507-b1b87fee0e54", 00:16:38.436 "strip_size_kb": 64, 00:16:38.436 "state": "online", 00:16:38.436 "raid_level": "concat", 00:16:38.436 "superblock": true, 00:16:38.436 "num_base_bdevs": 4, 00:16:38.436 "num_base_bdevs_discovered": 4, 00:16:38.436 "num_base_bdevs_operational": 4, 00:16:38.436 "base_bdevs_list": [ 00:16:38.436 { 00:16:38.436 "name": "pt1", 00:16:38.436 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:38.436 "is_configured": true, 00:16:38.436 "data_offset": 2048, 00:16:38.436 "data_size": 63488 00:16:38.436 }, 00:16:38.436 { 00:16:38.436 "name": "pt2", 00:16:38.436 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:38.436 "is_configured": true, 00:16:38.436 "data_offset": 2048, 00:16:38.436 "data_size": 63488 00:16:38.436 }, 00:16:38.437 { 00:16:38.437 "name": "pt3", 00:16:38.437 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:38.437 "is_configured": true, 00:16:38.437 "data_offset": 2048, 00:16:38.437 "data_size": 63488 00:16:38.437 }, 00:16:38.437 { 00:16:38.437 "name": "pt4", 00:16:38.437 
"uuid": "00000000-0000-0000-0000-000000000004", 00:16:38.437 "is_configured": true, 00:16:38.437 "data_offset": 2048, 00:16:38.437 "data_size": 63488 00:16:38.437 } 00:16:38.437 ] 00:16:38.437 } 00:16:38.437 } 00:16:38.437 }' 00:16:38.437 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:38.437 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:38.437 pt2 00:16:38.437 pt3 00:16:38.437 pt4' 00:16:38.437 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.437 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:38.437 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:38.437 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.437 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:38.437 13:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.437 13:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.437 13:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.437 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:38.437 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:38.437 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:38.437 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:38.437 13:36:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.437 13:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.437 13:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.437 13:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.437 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:38.437 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:38.437 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:38.437 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.437 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:38.437 13:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.437 13:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.437 13:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.437 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:38.437 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:38.437 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:38.437 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:38.437 13:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.437 13:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.437 13:36:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.697 13:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.697 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:38.697 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:38.697 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:38.697 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:38.697 13:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.697 13:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.697 [2024-11-20 13:36:37.958733] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:38.697 13:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.697 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5b8a4697-3db1-4381-b507-b1b87fee0e54 '!=' 5b8a4697-3db1-4381-b507-b1b87fee0e54 ']' 00:16:38.697 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:16:38.697 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:38.697 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:38.697 13:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72374 00:16:38.697 13:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72374 ']' 00:16:38.697 13:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72374 00:16:38.697 13:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:16:38.697 13:36:38 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:38.697 13:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72374 00:16:38.697 13:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:38.697 13:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:38.697 killing process with pid 72374 00:16:38.697 13:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72374' 00:16:38.697 13:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72374 00:16:38.697 13:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72374 00:16:38.697 [2024-11-20 13:36:38.037689] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:38.697 [2024-11-20 13:36:38.037778] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:38.697 [2024-11-20 13:36:38.037854] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:38.697 [2024-11-20 13:36:38.037865] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:38.999 [2024-11-20 13:36:38.444652] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:40.377 13:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:40.377 00:16:40.377 real 0m5.398s 00:16:40.377 user 0m7.702s 00:16:40.377 sys 0m0.998s 00:16:40.377 13:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:40.377 ************************************ 00:16:40.377 END TEST raid_superblock_test 00:16:40.377 ************************************ 00:16:40.377 13:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.377 
13:36:39 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:16:40.377 13:36:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:40.377 13:36:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:40.377 13:36:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:40.377 ************************************ 00:16:40.377 START TEST raid_read_error_test 00:16:40.377 ************************************ 00:16:40.377 13:36:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:16:40.377 13:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:16:40.377 13:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:16:40.377 13:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:16:40.377 13:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:16:40.377 13:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:40.377 13:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:16:40.377 13:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:40.377 13:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:40.377 13:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:16:40.377 13:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:40.377 13:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:40.377 13:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:16:40.377 13:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:40.377 13:36:39 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:40.377 13:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:16:40.377 13:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:40.377 13:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:40.377 13:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:40.377 13:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:16:40.377 13:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:16:40.377 13:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:16:40.377 13:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:16:40.377 13:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:16:40.377 13:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:16:40.377 13:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:16:40.377 13:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:16:40.377 13:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:16:40.377 13:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:16:40.377 13:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.rygF5m1Gfx 00:16:40.377 13:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72634 00:16:40.378 13:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72634 00:16:40.378 13:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T 
raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:40.378 13:36:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 72634 ']' 00:16:40.378 13:36:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.378 13:36:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:40.378 13:36:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.378 13:36:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:40.378 13:36:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.378 [2024-11-20 13:36:39.773799] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:16:40.378 [2024-11-20 13:36:39.774501] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72634 ] 00:16:40.637 [2024-11-20 13:36:39.954006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.637 [2024-11-20 13:36:40.073115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.896 [2024-11-20 13:36:40.283658] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:40.896 [2024-11-20 13:36:40.283698] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:41.155 13:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:41.155 13:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:16:41.155 13:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:41.155 13:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:41.155 13:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.155 13:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.436 BaseBdev1_malloc 00:16:41.436 13:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.436 13:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:16:41.436 13:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.436 13:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.436 true 00:16:41.436 13:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:41.436 13:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:41.436 13:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.436 13:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.436 [2024-11-20 13:36:40.673101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:41.436 [2024-11-20 13:36:40.673156] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.436 [2024-11-20 13:36:40.673179] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:41.436 [2024-11-20 13:36:40.673193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.436 [2024-11-20 13:36:40.675494] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.436 [2024-11-20 13:36:40.675539] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:41.436 BaseBdev1 00:16:41.436 13:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.436 13:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:41.436 13:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:41.436 13:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.437 BaseBdev2_malloc 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.437 true 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.437 [2024-11-20 13:36:40.741582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:41.437 [2024-11-20 13:36:40.741637] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.437 [2024-11-20 13:36:40.741655] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:41.437 [2024-11-20 13:36:40.741669] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.437 [2024-11-20 13:36:40.743981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.437 [2024-11-20 13:36:40.744023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:41.437 BaseBdev2 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.437 BaseBdev3_malloc 00:16:41.437 13:36:40 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.437 true 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.437 [2024-11-20 13:36:40.816908] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:41.437 [2024-11-20 13:36:40.816962] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.437 [2024-11-20 13:36:40.816981] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:41.437 [2024-11-20 13:36:40.816995] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.437 [2024-11-20 13:36:40.819345] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.437 [2024-11-20 13:36:40.819387] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:41.437 BaseBdev3 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.437 BaseBdev4_malloc 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.437 true 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.437 [2024-11-20 13:36:40.885748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:41.437 [2024-11-20 13:36:40.885943] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.437 [2024-11-20 13:36:40.885972] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:41.437 [2024-11-20 13:36:40.885988] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.437 [2024-11-20 13:36:40.888526] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.437 [2024-11-20 13:36:40.888576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:41.437 BaseBdev4 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.437 [2024-11-20 13:36:40.897800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:41.437 [2024-11-20 13:36:40.899845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:41.437 [2024-11-20 13:36:40.899918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:41.437 [2024-11-20 13:36:40.899979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:41.437 [2024-11-20 13:36:40.900221] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:16:41.437 [2024-11-20 13:36:40.900237] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:41.437 [2024-11-20 13:36:40.900490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:16:41.437 [2024-11-20 13:36:40.900642] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:16:41.437 [2024-11-20 13:36:40.900655] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:16:41.437 [2024-11-20 13:36:40.900805] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:16:41.437 13:36:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:41.437 13:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.438 13:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:41.438 13:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.438 13:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.438 13:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.438 13:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.438 13:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.438 13:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.438 13:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.438 13:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.695 13:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.695 13:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.695 "name": "raid_bdev1", 00:16:41.695 "uuid": "03077118-0796-48f0-bf08-dc0356996f6c", 00:16:41.695 "strip_size_kb": 64, 00:16:41.695 "state": "online", 00:16:41.695 "raid_level": "concat", 00:16:41.695 "superblock": true, 00:16:41.695 "num_base_bdevs": 4, 00:16:41.695 "num_base_bdevs_discovered": 4, 00:16:41.695 "num_base_bdevs_operational": 4, 00:16:41.695 "base_bdevs_list": [ 
00:16:41.695 { 00:16:41.695 "name": "BaseBdev1", 00:16:41.695 "uuid": "d686d4e7-1833-5c53-a57b-22ad9bf1a521", 00:16:41.695 "is_configured": true, 00:16:41.695 "data_offset": 2048, 00:16:41.695 "data_size": 63488 00:16:41.695 }, 00:16:41.695 { 00:16:41.695 "name": "BaseBdev2", 00:16:41.695 "uuid": "4f33b87f-f2c6-5455-85bd-0bd712c03a85", 00:16:41.695 "is_configured": true, 00:16:41.695 "data_offset": 2048, 00:16:41.695 "data_size": 63488 00:16:41.695 }, 00:16:41.695 { 00:16:41.695 "name": "BaseBdev3", 00:16:41.695 "uuid": "2fe1edae-1535-5396-ac71-1bea975214ac", 00:16:41.695 "is_configured": true, 00:16:41.695 "data_offset": 2048, 00:16:41.695 "data_size": 63488 00:16:41.695 }, 00:16:41.695 { 00:16:41.695 "name": "BaseBdev4", 00:16:41.695 "uuid": "ef8fc520-1eb8-5a98-906f-939d418d4442", 00:16:41.695 "is_configured": true, 00:16:41.695 "data_offset": 2048, 00:16:41.695 "data_size": 63488 00:16:41.695 } 00:16:41.695 ] 00:16:41.695 }' 00:16:41.695 13:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.695 13:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.953 13:36:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:41.953 13:36:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:41.953 [2024-11-20 13:36:41.410494] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:16:42.887 13:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:42.887 13:36:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.887 13:36:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.887 13:36:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.887 13:36:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:42.887 13:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:16:42.887 13:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:16:42.887 13:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:16:42.887 13:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.887 13:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.887 13:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:42.887 13:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.887 13:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:42.887 13:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.887 13:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.887 13:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.887 13:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.887 13:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.887 13:36:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.887 13:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.887 13:36:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.887 13:36:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.146 13:36:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.146 "name": "raid_bdev1", 00:16:43.146 "uuid": "03077118-0796-48f0-bf08-dc0356996f6c", 00:16:43.146 "strip_size_kb": 64, 00:16:43.146 "state": "online", 00:16:43.146 "raid_level": "concat", 00:16:43.146 "superblock": true, 00:16:43.146 "num_base_bdevs": 4, 00:16:43.146 "num_base_bdevs_discovered": 4, 00:16:43.146 "num_base_bdevs_operational": 4, 00:16:43.146 "base_bdevs_list": [ 00:16:43.146 { 00:16:43.146 "name": "BaseBdev1", 00:16:43.146 "uuid": "d686d4e7-1833-5c53-a57b-22ad9bf1a521", 00:16:43.146 "is_configured": true, 00:16:43.146 "data_offset": 2048, 00:16:43.146 "data_size": 63488 00:16:43.146 }, 00:16:43.146 { 00:16:43.146 "name": "BaseBdev2", 00:16:43.146 "uuid": "4f33b87f-f2c6-5455-85bd-0bd712c03a85", 00:16:43.146 "is_configured": true, 00:16:43.146 "data_offset": 2048, 00:16:43.146 "data_size": 63488 00:16:43.146 }, 00:16:43.146 { 00:16:43.146 "name": "BaseBdev3", 00:16:43.146 "uuid": "2fe1edae-1535-5396-ac71-1bea975214ac", 00:16:43.146 "is_configured": true, 00:16:43.146 "data_offset": 2048, 00:16:43.146 "data_size": 63488 00:16:43.146 }, 00:16:43.146 { 00:16:43.146 "name": "BaseBdev4", 00:16:43.146 "uuid": "ef8fc520-1eb8-5a98-906f-939d418d4442", 00:16:43.146 "is_configured": true, 00:16:43.146 "data_offset": 2048, 00:16:43.146 "data_size": 63488 00:16:43.146 } 00:16:43.146 ] 00:16:43.146 }' 00:16:43.146 13:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.146 13:36:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.405 13:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:43.405 13:36:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.405 13:36:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.405 [2024-11-20 13:36:42.732181] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:43.405 [2024-11-20 13:36:42.732238] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:43.405 [2024-11-20 13:36:42.735156] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:43.405 [2024-11-20 13:36:42.735250] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:43.405 [2024-11-20 13:36:42.735309] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:43.405 [2024-11-20 13:36:42.735326] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:16:43.405 { 00:16:43.405 "results": [ 00:16:43.405 { 00:16:43.405 "job": "raid_bdev1", 00:16:43.405 "core_mask": "0x1", 00:16:43.405 "workload": "randrw", 00:16:43.405 "percentage": 50, 00:16:43.405 "status": "finished", 00:16:43.405 "queue_depth": 1, 00:16:43.405 "io_size": 131072, 00:16:43.405 "runtime": 1.321052, 00:16:43.405 "iops": 15030.44543288228, 00:16:43.405 "mibps": 1878.805679110285, 00:16:43.405 "io_failed": 1, 00:16:43.405 "io_timeout": 0, 00:16:43.405 "avg_latency_us": 92.31946020472078, 00:16:43.405 "min_latency_us": 26.936546184738955, 00:16:43.405 "max_latency_us": 1434.4224899598394 00:16:43.405 } 00:16:43.405 ], 00:16:43.405 "core_count": 1 00:16:43.405 } 00:16:43.405 13:36:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.405 13:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72634 00:16:43.405 13:36:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72634 ']' 00:16:43.405 13:36:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72634 00:16:43.405 13:36:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:16:43.405 13:36:42 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:43.405 13:36:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72634 00:16:43.405 killing process with pid 72634 00:16:43.405 13:36:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:43.406 13:36:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:43.406 13:36:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72634' 00:16:43.406 13:36:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72634 00:16:43.406 13:36:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72634 00:16:43.406 [2024-11-20 13:36:42.781562] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:43.973 [2024-11-20 13:36:43.153771] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:44.982 13:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.rygF5m1Gfx 00:16:44.982 13:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:44.982 13:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:45.242 ************************************ 00:16:45.242 END TEST raid_read_error_test 00:16:45.242 ************************************ 00:16:45.242 13:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:16:45.242 13:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:16:45.242 13:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:45.242 13:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:45.242 13:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:16:45.242 00:16:45.242 real 0m4.811s 
00:16:45.242 user 0m5.554s 00:16:45.242 sys 0m0.626s 00:16:45.242 13:36:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:45.242 13:36:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.242 13:36:44 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:16:45.242 13:36:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:45.242 13:36:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:45.242 13:36:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:45.242 ************************************ 00:16:45.242 START TEST raid_write_error_test 00:16:45.242 ************************************ 00:16:45.242 13:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:16:45.242 13:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:16:45.242 13:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:16:45.243 13:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:16:45.243 13:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:16:45.243 13:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:45.243 13:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:16:45.243 13:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:45.243 13:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:45.243 13:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:16:45.243 13:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:45.243 13:36:44 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:45.243 13:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:16:45.243 13:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:45.243 13:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:45.243 13:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:16:45.243 13:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:45.243 13:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:45.243 13:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:45.243 13:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:16:45.243 13:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:16:45.243 13:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:16:45.243 13:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:16:45.243 13:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:16:45.243 13:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:16:45.243 13:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:16:45.243 13:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:16:45.243 13:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:16:45.243 13:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:16:45.243 13:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6nG4W7OCaD 00:16:45.243 13:36:44 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72780 00:16:45.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.243 13:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72780 00:16:45.243 13:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 72780 ']' 00:16:45.243 13:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.243 13:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:45.243 13:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.243 13:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:45.243 13:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:45.243 13:36:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.243 [2024-11-20 13:36:44.656897] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:16:45.243 [2024-11-20 13:36:44.657013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72780 ] 00:16:45.501 [2024-11-20 13:36:44.836208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.501 [2024-11-20 13:36:44.984820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.760 [2024-11-20 13:36:45.236511] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:45.760 [2024-11-20 13:36:45.236607] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:46.019 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:46.019 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:16:46.019 13:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:46.019 13:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:46.019 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.019 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.278 BaseBdev1_malloc 00:16:46.278 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.278 13:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:16:46.278 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.278 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.278 true 00:16:46.278 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:46.278 13:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:46.278 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.278 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.278 [2024-11-20 13:36:45.543115] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:46.278 [2024-11-20 13:36:45.543194] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.278 [2024-11-20 13:36:45.543220] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:46.278 [2024-11-20 13:36:45.543236] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.278 [2024-11-20 13:36:45.545961] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.278 [2024-11-20 13:36:45.546007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:46.278 BaseBdev1 00:16:46.278 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.278 13:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:46.278 13:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:46.278 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.278 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.278 BaseBdev2_malloc 00:16:46.278 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.278 13:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:16:46.278 13:36:45 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.278 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.278 true 00:16:46.278 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.278 13:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:46.278 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.278 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.278 [2024-11-20 13:36:45.606574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:46.278 [2024-11-20 13:36:45.607439] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.278 [2024-11-20 13:36:45.607474] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:46.278 [2024-11-20 13:36:45.607490] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.278 [2024-11-20 13:36:45.610236] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.278 [2024-11-20 13:36:45.610278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:46.278 BaseBdev2 00:16:46.278 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.278 13:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:46.278 13:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:46.278 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.278 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:16:46.278 BaseBdev3_malloc 00:16:46.278 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.278 13:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:16:46.278 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.278 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.278 true 00:16:46.278 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.278 13:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:46.278 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.278 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.278 [2024-11-20 13:36:45.682647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:46.278 [2024-11-20 13:36:45.682720] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.278 [2024-11-20 13:36:45.682743] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:46.278 [2024-11-20 13:36:45.682758] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.278 [2024-11-20 13:36:45.685527] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.279 [2024-11-20 13:36:45.685575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:46.279 BaseBdev3 00:16:46.279 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.279 13:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:46.279 13:36:45 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:46.279 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.279 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.279 BaseBdev4_malloc 00:16:46.279 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.279 13:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:16:46.279 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.279 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.279 true 00:16:46.279 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.279 13:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:46.279 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.279 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.279 [2024-11-20 13:36:45.750719] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:46.279 [2024-11-20 13:36:45.750973] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.279 [2024-11-20 13:36:45.751033] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:46.279 [2024-11-20 13:36:45.751144] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.279 [2024-11-20 13:36:45.753938] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.279 [2024-11-20 13:36:45.754118] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:46.279 BaseBdev4 
00:16:46.279 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.279 13:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:16:46.279 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.279 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.537 [2024-11-20 13:36:45.762969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:46.537 [2024-11-20 13:36:45.765521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:46.537 [2024-11-20 13:36:45.765739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:46.537 [2024-11-20 13:36:45.765849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:46.537 [2024-11-20 13:36:45.766186] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:16:46.538 [2024-11-20 13:36:45.766239] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:46.538 [2024-11-20 13:36:45.766670] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:16:46.538 [2024-11-20 13:36:45.766985] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:16:46.538 [2024-11-20 13:36:45.767111] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:16:46.538 [2024-11-20 13:36:45.767477] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:46.538 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.538 13:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:16:46.538 13:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.538 13:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.538 13:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:46.538 13:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.538 13:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:46.538 13:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.538 13:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.538 13:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.538 13:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.538 13:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.538 13:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.538 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.538 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.538 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.538 13:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.538 "name": "raid_bdev1", 00:16:46.538 "uuid": "af8b08ca-b555-4159-b7bd-d9a972b34b77", 00:16:46.538 "strip_size_kb": 64, 00:16:46.538 "state": "online", 00:16:46.538 "raid_level": "concat", 00:16:46.538 "superblock": true, 00:16:46.538 "num_base_bdevs": 4, 00:16:46.538 "num_base_bdevs_discovered": 4, 00:16:46.538 
"num_base_bdevs_operational": 4, 00:16:46.538 "base_bdevs_list": [ 00:16:46.538 { 00:16:46.538 "name": "BaseBdev1", 00:16:46.538 "uuid": "8d740edd-a82f-5f34-abcb-dbe7e8c319c9", 00:16:46.538 "is_configured": true, 00:16:46.538 "data_offset": 2048, 00:16:46.538 "data_size": 63488 00:16:46.538 }, 00:16:46.538 { 00:16:46.538 "name": "BaseBdev2", 00:16:46.538 "uuid": "b614e294-a9fd-58a8-af6d-bed6fac07f69", 00:16:46.538 "is_configured": true, 00:16:46.538 "data_offset": 2048, 00:16:46.538 "data_size": 63488 00:16:46.538 }, 00:16:46.538 { 00:16:46.538 "name": "BaseBdev3", 00:16:46.538 "uuid": "22635bd0-5dc2-5b5a-bc96-8f8dfcf827bd", 00:16:46.538 "is_configured": true, 00:16:46.538 "data_offset": 2048, 00:16:46.538 "data_size": 63488 00:16:46.538 }, 00:16:46.538 { 00:16:46.538 "name": "BaseBdev4", 00:16:46.538 "uuid": "7c3a33cc-3f80-5f34-ba06-d2052566c063", 00:16:46.538 "is_configured": true, 00:16:46.538 "data_offset": 2048, 00:16:46.538 "data_size": 63488 00:16:46.538 } 00:16:46.538 ] 00:16:46.538 }' 00:16:46.538 13:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.538 13:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.796 13:36:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:46.796 13:36:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:46.796 [2024-11-20 13:36:46.272227] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:16:47.730 13:36:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:47.730 13:36:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.730 13:36:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.730 13:36:47 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.730 13:36:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:47.730 13:36:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:16:47.730 13:36:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:16:47.730 13:36:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:16:47.730 13:36:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:47.730 13:36:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.730 13:36:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:47.731 13:36:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.731 13:36:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:47.731 13:36:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.731 13:36:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.731 13:36:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.731 13:36:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.731 13:36:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.731 13:36:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.731 13:36:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.731 13:36:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.989 13:36:47 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.989 13:36:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.989 "name": "raid_bdev1", 00:16:47.989 "uuid": "af8b08ca-b555-4159-b7bd-d9a972b34b77", 00:16:47.989 "strip_size_kb": 64, 00:16:47.989 "state": "online", 00:16:47.989 "raid_level": "concat", 00:16:47.989 "superblock": true, 00:16:47.989 "num_base_bdevs": 4, 00:16:47.989 "num_base_bdevs_discovered": 4, 00:16:47.989 "num_base_bdevs_operational": 4, 00:16:47.989 "base_bdevs_list": [ 00:16:47.989 { 00:16:47.989 "name": "BaseBdev1", 00:16:47.989 "uuid": "8d740edd-a82f-5f34-abcb-dbe7e8c319c9", 00:16:47.989 "is_configured": true, 00:16:47.989 "data_offset": 2048, 00:16:47.989 "data_size": 63488 00:16:47.989 }, 00:16:47.989 { 00:16:47.989 "name": "BaseBdev2", 00:16:47.989 "uuid": "b614e294-a9fd-58a8-af6d-bed6fac07f69", 00:16:47.989 "is_configured": true, 00:16:47.989 "data_offset": 2048, 00:16:47.989 "data_size": 63488 00:16:47.989 }, 00:16:47.989 { 00:16:47.989 "name": "BaseBdev3", 00:16:47.989 "uuid": "22635bd0-5dc2-5b5a-bc96-8f8dfcf827bd", 00:16:47.989 "is_configured": true, 00:16:47.989 "data_offset": 2048, 00:16:47.989 "data_size": 63488 00:16:47.989 }, 00:16:47.989 { 00:16:47.989 "name": "BaseBdev4", 00:16:47.989 "uuid": "7c3a33cc-3f80-5f34-ba06-d2052566c063", 00:16:47.989 "is_configured": true, 00:16:47.989 "data_offset": 2048, 00:16:47.989 "data_size": 63488 00:16:47.989 } 00:16:47.989 ] 00:16:47.989 }' 00:16:47.989 13:36:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.989 13:36:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.249 13:36:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:48.249 13:36:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.249 13:36:47 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:48.249 [2024-11-20 13:36:47.561769] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:48.249 [2024-11-20 13:36:47.561827] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:48.249 [2024-11-20 13:36:47.564599] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:48.249 [2024-11-20 13:36:47.564681] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.249 [2024-11-20 13:36:47.564732] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:48.249 [2024-11-20 13:36:47.564749] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:16:48.249 { 00:16:48.249 "results": [ 00:16:48.249 { 00:16:48.249 "job": "raid_bdev1", 00:16:48.249 "core_mask": "0x1", 00:16:48.249 "workload": "randrw", 00:16:48.249 "percentage": 50, 00:16:48.249 "status": "finished", 00:16:48.249 "queue_depth": 1, 00:16:48.249 "io_size": 131072, 00:16:48.249 "runtime": 1.28912, 00:16:48.249 "iops": 13035.248851929999, 00:16:48.249 "mibps": 1629.4061064912498, 00:16:48.249 "io_failed": 1, 00:16:48.249 "io_timeout": 0, 00:16:48.249 "avg_latency_us": 107.60177791797958, 00:16:48.249 "min_latency_us": 27.142168674698794, 00:16:48.249 "max_latency_us": 1572.6008032128514 00:16:48.249 } 00:16:48.249 ], 00:16:48.249 "core_count": 1 00:16:48.249 } 00:16:48.249 13:36:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.249 13:36:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72780 00:16:48.249 13:36:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 72780 ']' 00:16:48.249 13:36:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 72780 00:16:48.249 13:36:47 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:16:48.249 13:36:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:48.249 13:36:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72780 00:16:48.249 13:36:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:48.249 13:36:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:48.249 13:36:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72780' 00:16:48.249 killing process with pid 72780 00:16:48.249 13:36:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 72780 00:16:48.249 13:36:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 72780 00:16:48.249 [2024-11-20 13:36:47.610723] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:48.508 [2024-11-20 13:36:47.973195] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:49.925 13:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:49.925 13:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6nG4W7OCaD 00:16:49.925 13:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:49.925 13:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.78 00:16:49.925 13:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:16:49.925 13:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:49.925 13:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:49.925 13:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.78 != \0\.\0\0 ]] 00:16:49.925 00:16:49.925 real 0m4.712s 00:16:49.925 user 0m5.312s 
00:16:49.925 sys 0m0.696s 00:16:49.925 ************************************ 00:16:49.925 END TEST raid_write_error_test 00:16:49.925 ************************************ 00:16:49.925 13:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:49.925 13:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.925 13:36:49 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:16:49.925 13:36:49 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:16:49.925 13:36:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:49.925 13:36:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:49.925 13:36:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:49.925 ************************************ 00:16:49.925 START TEST raid_state_function_test 00:16:49.925 ************************************ 00:16:49.925 13:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:16:49.925 13:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:49.925 13:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:49.925 13:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:49.925 13:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:49.925 13:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:49.925 13:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:49.925 13:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:49.925 13:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:49.925 
13:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:49.925 13:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:49.925 13:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:49.925 13:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:49.925 13:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:49.925 13:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:49.925 13:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:49.925 13:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:49.925 13:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:49.925 13:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:49.925 13:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:49.925 13:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:49.925 13:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:49.925 13:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:49.925 13:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:49.925 13:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:49.925 13:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:49.925 13:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:49.925 13:36:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:49.925 13:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:49.925 Process raid pid: 72929 00:16:49.925 13:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72929 00:16:49.925 13:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72929' 00:16:49.925 13:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72929 00:16:49.925 13:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 72929 ']' 00:16:49.925 13:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.925 13:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:49.925 13:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:49.925 13:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.925 13:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:49.925 13:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.184 [2024-11-20 13:36:49.433932] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:16:50.184 [2024-11-20 13:36:49.434249] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:50.184 [2024-11-20 13:36:49.634351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.441 [2024-11-20 13:36:49.755681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.700 [2024-11-20 13:36:49.969251] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:50.700 [2024-11-20 13:36:49.969486] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:50.961 13:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:50.961 13:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:16:50.961 13:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:50.961 13:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.961 13:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.961 [2024-11-20 13:36:50.360478] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:50.961 [2024-11-20 13:36:50.360535] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:50.961 [2024-11-20 13:36:50.360547] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:50.961 [2024-11-20 13:36:50.360561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:50.961 [2024-11-20 13:36:50.360570] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:16:50.961 [2024-11-20 13:36:50.360583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:50.961 [2024-11-20 13:36:50.360598] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:50.961 [2024-11-20 13:36:50.360611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:50.961 13:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.961 13:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:50.961 13:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:50.961 13:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:50.961 13:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:50.961 13:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:50.962 13:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:50.962 13:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.962 13:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.962 13:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.962 13:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.962 13:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.962 13:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.962 13:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:16:50.962 13:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.962 13:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.962 13:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.962 "name": "Existed_Raid", 00:16:50.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.962 "strip_size_kb": 0, 00:16:50.962 "state": "configuring", 00:16:50.962 "raid_level": "raid1", 00:16:50.962 "superblock": false, 00:16:50.962 "num_base_bdevs": 4, 00:16:50.962 "num_base_bdevs_discovered": 0, 00:16:50.962 "num_base_bdevs_operational": 4, 00:16:50.962 "base_bdevs_list": [ 00:16:50.962 { 00:16:50.962 "name": "BaseBdev1", 00:16:50.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.962 "is_configured": false, 00:16:50.962 "data_offset": 0, 00:16:50.962 "data_size": 0 00:16:50.962 }, 00:16:50.962 { 00:16:50.962 "name": "BaseBdev2", 00:16:50.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.962 "is_configured": false, 00:16:50.962 "data_offset": 0, 00:16:50.962 "data_size": 0 00:16:50.962 }, 00:16:50.962 { 00:16:50.962 "name": "BaseBdev3", 00:16:50.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.962 "is_configured": false, 00:16:50.962 "data_offset": 0, 00:16:50.962 "data_size": 0 00:16:50.962 }, 00:16:50.962 { 00:16:50.962 "name": "BaseBdev4", 00:16:50.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.962 "is_configured": false, 00:16:50.962 "data_offset": 0, 00:16:50.962 "data_size": 0 00:16:50.962 } 00:16:50.962 ] 00:16:50.962 }' 00:16:50.962 13:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.962 13:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.530 13:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:16:51.530 13:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.530 13:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.530 [2024-11-20 13:36:50.783871] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:51.530 [2024-11-20 13:36:50.784112] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:51.530 13:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.530 13:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:51.530 13:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.530 13:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.530 [2024-11-20 13:36:50.791838] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:51.530 [2024-11-20 13:36:50.791998] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:51.530 [2024-11-20 13:36:50.792017] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:51.530 [2024-11-20 13:36:50.792032] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:51.530 [2024-11-20 13:36:50.792040] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:51.530 [2024-11-20 13:36:50.792052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:51.530 [2024-11-20 13:36:50.792071] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:51.530 [2024-11-20 13:36:50.792084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:51.530 13:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.530 13:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:51.530 13:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.530 13:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.530 [2024-11-20 13:36:50.838740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:51.530 BaseBdev1 00:16:51.530 13:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.530 13:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:51.530 13:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:51.530 13:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:51.530 13:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:51.530 13:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:51.530 13:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:51.530 13:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:51.530 13:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.530 13:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.530 13:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.530 13:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
-t 2000 00:16:51.530 13:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.530 13:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.530 [ 00:16:51.530 { 00:16:51.530 "name": "BaseBdev1", 00:16:51.530 "aliases": [ 00:16:51.530 "4d67876e-4188-41d6-a950-42f14c67edd8" 00:16:51.530 ], 00:16:51.530 "product_name": "Malloc disk", 00:16:51.530 "block_size": 512, 00:16:51.530 "num_blocks": 65536, 00:16:51.530 "uuid": "4d67876e-4188-41d6-a950-42f14c67edd8", 00:16:51.530 "assigned_rate_limits": { 00:16:51.530 "rw_ios_per_sec": 0, 00:16:51.530 "rw_mbytes_per_sec": 0, 00:16:51.530 "r_mbytes_per_sec": 0, 00:16:51.530 "w_mbytes_per_sec": 0 00:16:51.530 }, 00:16:51.530 "claimed": true, 00:16:51.530 "claim_type": "exclusive_write", 00:16:51.530 "zoned": false, 00:16:51.530 "supported_io_types": { 00:16:51.530 "read": true, 00:16:51.530 "write": true, 00:16:51.530 "unmap": true, 00:16:51.530 "flush": true, 00:16:51.530 "reset": true, 00:16:51.530 "nvme_admin": false, 00:16:51.530 "nvme_io": false, 00:16:51.530 "nvme_io_md": false, 00:16:51.530 "write_zeroes": true, 00:16:51.530 "zcopy": true, 00:16:51.530 "get_zone_info": false, 00:16:51.530 "zone_management": false, 00:16:51.530 "zone_append": false, 00:16:51.530 "compare": false, 00:16:51.530 "compare_and_write": false, 00:16:51.530 "abort": true, 00:16:51.530 "seek_hole": false, 00:16:51.530 "seek_data": false, 00:16:51.530 "copy": true, 00:16:51.530 "nvme_iov_md": false 00:16:51.530 }, 00:16:51.530 "memory_domains": [ 00:16:51.530 { 00:16:51.530 "dma_device_id": "system", 00:16:51.530 "dma_device_type": 1 00:16:51.530 }, 00:16:51.530 { 00:16:51.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.530 "dma_device_type": 2 00:16:51.530 } 00:16:51.530 ], 00:16:51.530 "driver_specific": {} 00:16:51.530 } 00:16:51.530 ] 00:16:51.530 13:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:51.530 13:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:51.530 13:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:51.530 13:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:51.530 13:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:51.530 13:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:51.530 13:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:51.530 13:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:51.530 13:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.531 13:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.531 13:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.531 13:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.531 13:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.531 13:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.531 13:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.531 13:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.531 13:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.531 13:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.531 "name": "Existed_Raid", 00:16:51.531 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:51.531 "strip_size_kb": 0, 00:16:51.531 "state": "configuring", 00:16:51.531 "raid_level": "raid1", 00:16:51.531 "superblock": false, 00:16:51.531 "num_base_bdevs": 4, 00:16:51.531 "num_base_bdevs_discovered": 1, 00:16:51.531 "num_base_bdevs_operational": 4, 00:16:51.531 "base_bdevs_list": [ 00:16:51.531 { 00:16:51.531 "name": "BaseBdev1", 00:16:51.531 "uuid": "4d67876e-4188-41d6-a950-42f14c67edd8", 00:16:51.531 "is_configured": true, 00:16:51.531 "data_offset": 0, 00:16:51.531 "data_size": 65536 00:16:51.531 }, 00:16:51.531 { 00:16:51.531 "name": "BaseBdev2", 00:16:51.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.531 "is_configured": false, 00:16:51.531 "data_offset": 0, 00:16:51.531 "data_size": 0 00:16:51.531 }, 00:16:51.531 { 00:16:51.531 "name": "BaseBdev3", 00:16:51.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.531 "is_configured": false, 00:16:51.531 "data_offset": 0, 00:16:51.531 "data_size": 0 00:16:51.531 }, 00:16:51.531 { 00:16:51.531 "name": "BaseBdev4", 00:16:51.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.531 "is_configured": false, 00:16:51.531 "data_offset": 0, 00:16:51.531 "data_size": 0 00:16:51.531 } 00:16:51.531 ] 00:16:51.531 }' 00:16:51.531 13:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.531 13:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.098 13:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:52.098 13:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.098 13:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.098 [2024-11-20 13:36:51.306420] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:52.098 [2024-11-20 13:36:51.306475] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:52.098 13:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.098 13:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:52.098 13:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.098 13:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.098 [2024-11-20 13:36:51.314466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:52.098 [2024-11-20 13:36:51.316570] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:52.098 [2024-11-20 13:36:51.316620] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:52.098 [2024-11-20 13:36:51.316632] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:52.098 [2024-11-20 13:36:51.316647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:52.098 [2024-11-20 13:36:51.316656] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:52.098 [2024-11-20 13:36:51.316668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:52.098 13:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.099 13:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:52.099 13:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:52.099 13:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:52.099 13:36:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:52.099 13:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:52.099 13:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:52.099 13:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:52.099 13:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:52.099 13:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.099 13:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.099 13:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.099 13:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.099 13:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.099 13:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.099 13:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.099 13:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.099 13:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.099 13:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.099 "name": "Existed_Raid", 00:16:52.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.099 "strip_size_kb": 0, 00:16:52.099 "state": "configuring", 00:16:52.099 "raid_level": "raid1", 00:16:52.099 "superblock": false, 00:16:52.099 "num_base_bdevs": 4, 00:16:52.099 "num_base_bdevs_discovered": 1, 00:16:52.099 
"num_base_bdevs_operational": 4, 00:16:52.099 "base_bdevs_list": [ 00:16:52.099 { 00:16:52.099 "name": "BaseBdev1", 00:16:52.099 "uuid": "4d67876e-4188-41d6-a950-42f14c67edd8", 00:16:52.099 "is_configured": true, 00:16:52.099 "data_offset": 0, 00:16:52.099 "data_size": 65536 00:16:52.099 }, 00:16:52.099 { 00:16:52.099 "name": "BaseBdev2", 00:16:52.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.099 "is_configured": false, 00:16:52.099 "data_offset": 0, 00:16:52.099 "data_size": 0 00:16:52.099 }, 00:16:52.099 { 00:16:52.099 "name": "BaseBdev3", 00:16:52.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.099 "is_configured": false, 00:16:52.099 "data_offset": 0, 00:16:52.099 "data_size": 0 00:16:52.099 }, 00:16:52.099 { 00:16:52.099 "name": "BaseBdev4", 00:16:52.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.099 "is_configured": false, 00:16:52.099 "data_offset": 0, 00:16:52.099 "data_size": 0 00:16:52.099 } 00:16:52.099 ] 00:16:52.099 }' 00:16:52.099 13:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.099 13:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.359 13:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:52.359 13:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.359 13:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.359 [2024-11-20 13:36:51.773700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:52.359 BaseBdev2 00:16:52.359 13:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.359 13:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:52.359 13:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev2 00:16:52.359 13:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:52.359 13:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:52.359 13:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:52.359 13:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:52.359 13:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:52.359 13:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.359 13:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.359 13:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.359 13:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:52.359 13:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.359 13:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.359 [ 00:16:52.359 { 00:16:52.359 "name": "BaseBdev2", 00:16:52.359 "aliases": [ 00:16:52.359 "dc70ab16-5dd6-4a5a-8a06-e0c6d8342650" 00:16:52.359 ], 00:16:52.359 "product_name": "Malloc disk", 00:16:52.359 "block_size": 512, 00:16:52.359 "num_blocks": 65536, 00:16:52.359 "uuid": "dc70ab16-5dd6-4a5a-8a06-e0c6d8342650", 00:16:52.359 "assigned_rate_limits": { 00:16:52.359 "rw_ios_per_sec": 0, 00:16:52.359 "rw_mbytes_per_sec": 0, 00:16:52.359 "r_mbytes_per_sec": 0, 00:16:52.359 "w_mbytes_per_sec": 0 00:16:52.359 }, 00:16:52.359 "claimed": true, 00:16:52.359 "claim_type": "exclusive_write", 00:16:52.359 "zoned": false, 00:16:52.359 "supported_io_types": { 00:16:52.359 "read": true, 00:16:52.359 "write": true, 00:16:52.359 
"unmap": true, 00:16:52.359 "flush": true, 00:16:52.359 "reset": true, 00:16:52.359 "nvme_admin": false, 00:16:52.359 "nvme_io": false, 00:16:52.359 "nvme_io_md": false, 00:16:52.359 "write_zeroes": true, 00:16:52.359 "zcopy": true, 00:16:52.359 "get_zone_info": false, 00:16:52.359 "zone_management": false, 00:16:52.359 "zone_append": false, 00:16:52.359 "compare": false, 00:16:52.359 "compare_and_write": false, 00:16:52.359 "abort": true, 00:16:52.359 "seek_hole": false, 00:16:52.359 "seek_data": false, 00:16:52.359 "copy": true, 00:16:52.359 "nvme_iov_md": false 00:16:52.359 }, 00:16:52.359 "memory_domains": [ 00:16:52.359 { 00:16:52.359 "dma_device_id": "system", 00:16:52.359 "dma_device_type": 1 00:16:52.359 }, 00:16:52.359 { 00:16:52.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.359 "dma_device_type": 2 00:16:52.359 } 00:16:52.359 ], 00:16:52.359 "driver_specific": {} 00:16:52.359 } 00:16:52.359 ] 00:16:52.359 13:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.359 13:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:52.359 13:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:52.359 13:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:52.359 13:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:52.359 13:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:52.359 13:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:52.359 13:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:52.359 13:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:52.359 13:36:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:52.359 13:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.359 13:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.359 13:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.359 13:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.359 13:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.359 13:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.359 13:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.359 13:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.359 13:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.618 13:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.618 "name": "Existed_Raid", 00:16:52.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.619 "strip_size_kb": 0, 00:16:52.619 "state": "configuring", 00:16:52.619 "raid_level": "raid1", 00:16:52.619 "superblock": false, 00:16:52.619 "num_base_bdevs": 4, 00:16:52.619 "num_base_bdevs_discovered": 2, 00:16:52.619 "num_base_bdevs_operational": 4, 00:16:52.619 "base_bdevs_list": [ 00:16:52.619 { 00:16:52.619 "name": "BaseBdev1", 00:16:52.619 "uuid": "4d67876e-4188-41d6-a950-42f14c67edd8", 00:16:52.619 "is_configured": true, 00:16:52.619 "data_offset": 0, 00:16:52.619 "data_size": 65536 00:16:52.619 }, 00:16:52.619 { 00:16:52.619 "name": "BaseBdev2", 00:16:52.619 "uuid": "dc70ab16-5dd6-4a5a-8a06-e0c6d8342650", 00:16:52.619 "is_configured": true, 00:16:52.619 
"data_offset": 0, 00:16:52.619 "data_size": 65536 00:16:52.619 }, 00:16:52.619 { 00:16:52.619 "name": "BaseBdev3", 00:16:52.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.619 "is_configured": false, 00:16:52.619 "data_offset": 0, 00:16:52.619 "data_size": 0 00:16:52.619 }, 00:16:52.619 { 00:16:52.619 "name": "BaseBdev4", 00:16:52.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.619 "is_configured": false, 00:16:52.619 "data_offset": 0, 00:16:52.619 "data_size": 0 00:16:52.619 } 00:16:52.619 ] 00:16:52.619 }' 00:16:52.619 13:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.619 13:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.878 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:52.878 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.878 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.878 [2024-11-20 13:36:52.271465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:52.878 BaseBdev3 00:16:52.878 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.878 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:52.878 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:52.878 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:52.878 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:52.878 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:52.878 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:16:52.878 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:52.878 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.878 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.878 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.878 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:52.878 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.878 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.878 [ 00:16:52.878 { 00:16:52.878 "name": "BaseBdev3", 00:16:52.878 "aliases": [ 00:16:52.878 "9121ca08-c5cb-4a33-9a7b-fdb6a0c8da26" 00:16:52.878 ], 00:16:52.878 "product_name": "Malloc disk", 00:16:52.878 "block_size": 512, 00:16:52.878 "num_blocks": 65536, 00:16:52.878 "uuid": "9121ca08-c5cb-4a33-9a7b-fdb6a0c8da26", 00:16:52.878 "assigned_rate_limits": { 00:16:52.878 "rw_ios_per_sec": 0, 00:16:52.878 "rw_mbytes_per_sec": 0, 00:16:52.878 "r_mbytes_per_sec": 0, 00:16:52.878 "w_mbytes_per_sec": 0 00:16:52.878 }, 00:16:52.878 "claimed": true, 00:16:52.878 "claim_type": "exclusive_write", 00:16:52.878 "zoned": false, 00:16:52.878 "supported_io_types": { 00:16:52.878 "read": true, 00:16:52.878 "write": true, 00:16:52.878 "unmap": true, 00:16:52.878 "flush": true, 00:16:52.878 "reset": true, 00:16:52.878 "nvme_admin": false, 00:16:52.878 "nvme_io": false, 00:16:52.878 "nvme_io_md": false, 00:16:52.878 "write_zeroes": true, 00:16:52.878 "zcopy": true, 00:16:52.878 "get_zone_info": false, 00:16:52.878 "zone_management": false, 00:16:52.878 "zone_append": false, 00:16:52.878 "compare": false, 00:16:52.878 "compare_and_write": false, 00:16:52.878 "abort": true, 
00:16:52.878 "seek_hole": false, 00:16:52.878 "seek_data": false, 00:16:52.878 "copy": true, 00:16:52.878 "nvme_iov_md": false 00:16:52.878 }, 00:16:52.878 "memory_domains": [ 00:16:52.878 { 00:16:52.878 "dma_device_id": "system", 00:16:52.878 "dma_device_type": 1 00:16:52.878 }, 00:16:52.878 { 00:16:52.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.878 "dma_device_type": 2 00:16:52.878 } 00:16:52.878 ], 00:16:52.878 "driver_specific": {} 00:16:52.878 } 00:16:52.878 ] 00:16:52.878 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.878 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:52.878 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:52.878 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:52.878 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:52.878 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:52.878 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:52.878 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:52.878 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:52.878 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:52.879 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.879 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.879 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.879 13:36:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.879 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.879 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.879 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.879 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.879 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.879 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.879 "name": "Existed_Raid", 00:16:52.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.879 "strip_size_kb": 0, 00:16:52.879 "state": "configuring", 00:16:52.879 "raid_level": "raid1", 00:16:52.879 "superblock": false, 00:16:52.879 "num_base_bdevs": 4, 00:16:52.879 "num_base_bdevs_discovered": 3, 00:16:52.879 "num_base_bdevs_operational": 4, 00:16:52.879 "base_bdevs_list": [ 00:16:52.879 { 00:16:52.879 "name": "BaseBdev1", 00:16:52.879 "uuid": "4d67876e-4188-41d6-a950-42f14c67edd8", 00:16:52.879 "is_configured": true, 00:16:52.879 "data_offset": 0, 00:16:52.879 "data_size": 65536 00:16:52.879 }, 00:16:52.879 { 00:16:52.879 "name": "BaseBdev2", 00:16:52.879 "uuid": "dc70ab16-5dd6-4a5a-8a06-e0c6d8342650", 00:16:52.879 "is_configured": true, 00:16:52.879 "data_offset": 0, 00:16:52.879 "data_size": 65536 00:16:52.879 }, 00:16:52.879 { 00:16:52.879 "name": "BaseBdev3", 00:16:52.879 "uuid": "9121ca08-c5cb-4a33-9a7b-fdb6a0c8da26", 00:16:52.879 "is_configured": true, 00:16:52.879 "data_offset": 0, 00:16:52.879 "data_size": 65536 00:16:52.879 }, 00:16:52.879 { 00:16:52.879 "name": "BaseBdev4", 00:16:52.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.879 "is_configured": false, 00:16:52.879 "data_offset": 
0, 00:16:52.879 "data_size": 0 00:16:52.879 } 00:16:52.879 ] 00:16:52.879 }' 00:16:52.879 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.879 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.447 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:53.447 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.447 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.447 [2024-11-20 13:36:52.737252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:53.447 [2024-11-20 13:36:52.737310] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:53.447 [2024-11-20 13:36:52.737321] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:53.447 [2024-11-20 13:36:52.737627] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:53.447 [2024-11-20 13:36:52.737805] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:53.447 [2024-11-20 13:36:52.737820] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:53.447 [2024-11-20 13:36:52.738120] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:53.447 BaseBdev4 00:16:53.447 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.447 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:53.447 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:53.447 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local 
bdev_timeout= 00:16:53.447 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:53.447 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:53.447 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:53.447 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:53.447 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.447 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.447 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.447 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:53.448 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.448 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.448 [ 00:16:53.448 { 00:16:53.448 "name": "BaseBdev4", 00:16:53.448 "aliases": [ 00:16:53.448 "b5d62b8e-f6ea-4d5d-94d7-e684a221897d" 00:16:53.448 ], 00:16:53.448 "product_name": "Malloc disk", 00:16:53.448 "block_size": 512, 00:16:53.448 "num_blocks": 65536, 00:16:53.448 "uuid": "b5d62b8e-f6ea-4d5d-94d7-e684a221897d", 00:16:53.448 "assigned_rate_limits": { 00:16:53.448 "rw_ios_per_sec": 0, 00:16:53.448 "rw_mbytes_per_sec": 0, 00:16:53.448 "r_mbytes_per_sec": 0, 00:16:53.448 "w_mbytes_per_sec": 0 00:16:53.448 }, 00:16:53.448 "claimed": true, 00:16:53.448 "claim_type": "exclusive_write", 00:16:53.448 "zoned": false, 00:16:53.448 "supported_io_types": { 00:16:53.448 "read": true, 00:16:53.448 "write": true, 00:16:53.448 "unmap": true, 00:16:53.448 "flush": true, 00:16:53.448 "reset": true, 00:16:53.448 "nvme_admin": false, 00:16:53.448 "nvme_io": 
false, 00:16:53.448 "nvme_io_md": false, 00:16:53.448 "write_zeroes": true, 00:16:53.448 "zcopy": true, 00:16:53.448 "get_zone_info": false, 00:16:53.448 "zone_management": false, 00:16:53.448 "zone_append": false, 00:16:53.448 "compare": false, 00:16:53.448 "compare_and_write": false, 00:16:53.448 "abort": true, 00:16:53.448 "seek_hole": false, 00:16:53.448 "seek_data": false, 00:16:53.448 "copy": true, 00:16:53.448 "nvme_iov_md": false 00:16:53.448 }, 00:16:53.448 "memory_domains": [ 00:16:53.448 { 00:16:53.448 "dma_device_id": "system", 00:16:53.448 "dma_device_type": 1 00:16:53.448 }, 00:16:53.448 { 00:16:53.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.448 "dma_device_type": 2 00:16:53.448 } 00:16:53.448 ], 00:16:53.448 "driver_specific": {} 00:16:53.448 } 00:16:53.448 ] 00:16:53.448 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.448 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:53.448 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:53.448 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:53.448 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:16:53.448 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:53.448 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:53.448 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:53.448 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:53.448 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:53.448 13:36:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.448 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.448 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.448 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.448 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.448 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.448 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.448 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.448 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.448 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.448 "name": "Existed_Raid", 00:16:53.448 "uuid": "cd2a39e6-e262-422b-8229-2b4e08540939", 00:16:53.448 "strip_size_kb": 0, 00:16:53.448 "state": "online", 00:16:53.448 "raid_level": "raid1", 00:16:53.448 "superblock": false, 00:16:53.448 "num_base_bdevs": 4, 00:16:53.448 "num_base_bdevs_discovered": 4, 00:16:53.448 "num_base_bdevs_operational": 4, 00:16:53.448 "base_bdevs_list": [ 00:16:53.448 { 00:16:53.448 "name": "BaseBdev1", 00:16:53.448 "uuid": "4d67876e-4188-41d6-a950-42f14c67edd8", 00:16:53.448 "is_configured": true, 00:16:53.448 "data_offset": 0, 00:16:53.448 "data_size": 65536 00:16:53.448 }, 00:16:53.448 { 00:16:53.448 "name": "BaseBdev2", 00:16:53.448 "uuid": "dc70ab16-5dd6-4a5a-8a06-e0c6d8342650", 00:16:53.448 "is_configured": true, 00:16:53.448 "data_offset": 0, 00:16:53.448 "data_size": 65536 00:16:53.448 }, 00:16:53.448 { 00:16:53.448 "name": "BaseBdev3", 00:16:53.448 "uuid": "9121ca08-c5cb-4a33-9a7b-fdb6a0c8da26", 
00:16:53.448 "is_configured": true, 00:16:53.448 "data_offset": 0, 00:16:53.448 "data_size": 65536 00:16:53.448 }, 00:16:53.448 { 00:16:53.448 "name": "BaseBdev4", 00:16:53.448 "uuid": "b5d62b8e-f6ea-4d5d-94d7-e684a221897d", 00:16:53.448 "is_configured": true, 00:16:53.448 "data_offset": 0, 00:16:53.448 "data_size": 65536 00:16:53.448 } 00:16:53.448 ] 00:16:53.448 }' 00:16:53.448 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.448 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.018 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:54.018 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:54.018 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:54.018 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:54.018 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:54.018 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:54.018 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:54.018 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:54.018 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.018 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.018 [2024-11-20 13:36:53.213006] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:54.018 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.018 13:36:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:54.018 "name": "Existed_Raid", 00:16:54.018 "aliases": [ 00:16:54.018 "cd2a39e6-e262-422b-8229-2b4e08540939" 00:16:54.018 ], 00:16:54.018 "product_name": "Raid Volume", 00:16:54.018 "block_size": 512, 00:16:54.018 "num_blocks": 65536, 00:16:54.018 "uuid": "cd2a39e6-e262-422b-8229-2b4e08540939", 00:16:54.018 "assigned_rate_limits": { 00:16:54.018 "rw_ios_per_sec": 0, 00:16:54.018 "rw_mbytes_per_sec": 0, 00:16:54.018 "r_mbytes_per_sec": 0, 00:16:54.018 "w_mbytes_per_sec": 0 00:16:54.018 }, 00:16:54.018 "claimed": false, 00:16:54.018 "zoned": false, 00:16:54.018 "supported_io_types": { 00:16:54.018 "read": true, 00:16:54.018 "write": true, 00:16:54.018 "unmap": false, 00:16:54.018 "flush": false, 00:16:54.018 "reset": true, 00:16:54.018 "nvme_admin": false, 00:16:54.018 "nvme_io": false, 00:16:54.018 "nvme_io_md": false, 00:16:54.018 "write_zeroes": true, 00:16:54.018 "zcopy": false, 00:16:54.018 "get_zone_info": false, 00:16:54.018 "zone_management": false, 00:16:54.018 "zone_append": false, 00:16:54.018 "compare": false, 00:16:54.018 "compare_and_write": false, 00:16:54.018 "abort": false, 00:16:54.018 "seek_hole": false, 00:16:54.018 "seek_data": false, 00:16:54.018 "copy": false, 00:16:54.018 "nvme_iov_md": false 00:16:54.018 }, 00:16:54.018 "memory_domains": [ 00:16:54.018 { 00:16:54.018 "dma_device_id": "system", 00:16:54.018 "dma_device_type": 1 00:16:54.018 }, 00:16:54.018 { 00:16:54.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.018 "dma_device_type": 2 00:16:54.018 }, 00:16:54.018 { 00:16:54.018 "dma_device_id": "system", 00:16:54.018 "dma_device_type": 1 00:16:54.018 }, 00:16:54.018 { 00:16:54.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.018 "dma_device_type": 2 00:16:54.018 }, 00:16:54.018 { 00:16:54.018 "dma_device_id": "system", 00:16:54.018 "dma_device_type": 1 00:16:54.018 }, 00:16:54.018 { 00:16:54.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.018 "dma_device_type": 2 
00:16:54.018 }, 00:16:54.018 { 00:16:54.018 "dma_device_id": "system", 00:16:54.018 "dma_device_type": 1 00:16:54.018 }, 00:16:54.018 { 00:16:54.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.018 "dma_device_type": 2 00:16:54.018 } 00:16:54.018 ], 00:16:54.018 "driver_specific": { 00:16:54.018 "raid": { 00:16:54.018 "uuid": "cd2a39e6-e262-422b-8229-2b4e08540939", 00:16:54.018 "strip_size_kb": 0, 00:16:54.018 "state": "online", 00:16:54.018 "raid_level": "raid1", 00:16:54.018 "superblock": false, 00:16:54.018 "num_base_bdevs": 4, 00:16:54.018 "num_base_bdevs_discovered": 4, 00:16:54.018 "num_base_bdevs_operational": 4, 00:16:54.018 "base_bdevs_list": [ 00:16:54.018 { 00:16:54.018 "name": "BaseBdev1", 00:16:54.018 "uuid": "4d67876e-4188-41d6-a950-42f14c67edd8", 00:16:54.018 "is_configured": true, 00:16:54.018 "data_offset": 0, 00:16:54.018 "data_size": 65536 00:16:54.018 }, 00:16:54.018 { 00:16:54.018 "name": "BaseBdev2", 00:16:54.018 "uuid": "dc70ab16-5dd6-4a5a-8a06-e0c6d8342650", 00:16:54.018 "is_configured": true, 00:16:54.018 "data_offset": 0, 00:16:54.018 "data_size": 65536 00:16:54.018 }, 00:16:54.018 { 00:16:54.018 "name": "BaseBdev3", 00:16:54.018 "uuid": "9121ca08-c5cb-4a33-9a7b-fdb6a0c8da26", 00:16:54.018 "is_configured": true, 00:16:54.018 "data_offset": 0, 00:16:54.018 "data_size": 65536 00:16:54.018 }, 00:16:54.018 { 00:16:54.018 "name": "BaseBdev4", 00:16:54.018 "uuid": "b5d62b8e-f6ea-4d5d-94d7-e684a221897d", 00:16:54.018 "is_configured": true, 00:16:54.018 "data_offset": 0, 00:16:54.018 "data_size": 65536 00:16:54.018 } 00:16:54.018 ] 00:16:54.018 } 00:16:54.018 } 00:16:54.018 }' 00:16:54.018 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:54.018 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:54.018 BaseBdev2 00:16:54.018 BaseBdev3 00:16:54.018 BaseBdev4' 00:16:54.018 
13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.018 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:54.018 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:54.018 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:54.018 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.018 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.018 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.018 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.018 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:54.018 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:54.018 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:54.018 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.018 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:54.018 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.018 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.019 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.019 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:16:54.019 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:54.019 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:54.019 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:54.019 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.019 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.019 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.019 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.019 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:54.019 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:54.019 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:54.019 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:54.019 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.019 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.019 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.019 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.019 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:54.019 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:16:54.019 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:54.019 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.019 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.278 [2024-11-20 13:36:53.504304] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:54.278 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.278 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:54.278 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:54.278 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:54.278 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:54.278 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:54.278 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:54.278 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:54.278 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:54.278 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:54.278 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:54.278 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:54.278 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.278 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:16:54.278 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.278 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.278 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.278 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.278 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.278 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.278 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.278 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.278 "name": "Existed_Raid", 00:16:54.278 "uuid": "cd2a39e6-e262-422b-8229-2b4e08540939", 00:16:54.278 "strip_size_kb": 0, 00:16:54.278 "state": "online", 00:16:54.278 "raid_level": "raid1", 00:16:54.278 "superblock": false, 00:16:54.278 "num_base_bdevs": 4, 00:16:54.278 "num_base_bdevs_discovered": 3, 00:16:54.278 "num_base_bdevs_operational": 3, 00:16:54.278 "base_bdevs_list": [ 00:16:54.278 { 00:16:54.278 "name": null, 00:16:54.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.278 "is_configured": false, 00:16:54.278 "data_offset": 0, 00:16:54.278 "data_size": 65536 00:16:54.278 }, 00:16:54.278 { 00:16:54.278 "name": "BaseBdev2", 00:16:54.278 "uuid": "dc70ab16-5dd6-4a5a-8a06-e0c6d8342650", 00:16:54.278 "is_configured": true, 00:16:54.278 "data_offset": 0, 00:16:54.278 "data_size": 65536 00:16:54.278 }, 00:16:54.278 { 00:16:54.278 "name": "BaseBdev3", 00:16:54.278 "uuid": "9121ca08-c5cb-4a33-9a7b-fdb6a0c8da26", 00:16:54.278 "is_configured": true, 00:16:54.278 "data_offset": 0, 00:16:54.278 "data_size": 65536 00:16:54.278 }, 00:16:54.278 { 
00:16:54.278 "name": "BaseBdev4", 00:16:54.278 "uuid": "b5d62b8e-f6ea-4d5d-94d7-e684a221897d", 00:16:54.278 "is_configured": true, 00:16:54.278 "data_offset": 0, 00:16:54.278 "data_size": 65536 00:16:54.278 } 00:16:54.278 ] 00:16:54.278 }' 00:16:54.278 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.278 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.846 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:54.846 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:54.846 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.846 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:54.846 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.846 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.846 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.846 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:54.846 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:54.846 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:54.846 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.846 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.846 [2024-11-20 13:36:54.099117] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:54.846 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.846 
13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:16:54.846 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:16:54.846 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:54.846 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:54.846 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:54.846 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:16:54.846 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:54.846 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:16:54.846 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:16:54.846 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:16:54.846 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:54.846 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:54.846 [2024-11-20 13:36:54.252425] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:16:55.105 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:55.105 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:16:55.105 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:16:55.105 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:55.105 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:55.105 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:55.105 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:16:55.105 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:55.105 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:16:55.105 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:16:55.105 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4
00:16:55.105 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:55.105 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:55.105 [2024-11-20 13:36:54.401778] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:16:55.105 [2024-11-20 13:36:54.401864] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:55.105 [2024-11-20 13:36:54.498191] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:55.105 [2024-11-20 13:36:54.498244] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:55.105 [2024-11-20 13:36:54.498259] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:16:55.105 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:55.105 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:16:55.105 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:16:55.105 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:55.105 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:55.105 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:55.105 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:16:55.105 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:55.105 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:16:55.105 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:16:55.105 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']'
00:16:55.105 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:16:55.105 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:16:55.105 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:16:55.105 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:55.105 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:55.365 BaseBdev2
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:55.365 [
00:16:55.365 {
00:16:55.365 "name": "BaseBdev2",
00:16:55.365 "aliases": [
00:16:55.365 "b086694b-7970-40ca-9e11-37ff2ab4ad20"
00:16:55.365 ],
00:16:55.365 "product_name": "Malloc disk",
00:16:55.365 "block_size": 512,
00:16:55.365 "num_blocks": 65536,
00:16:55.365 "uuid": "b086694b-7970-40ca-9e11-37ff2ab4ad20",
00:16:55.365 "assigned_rate_limits": {
00:16:55.365 "rw_ios_per_sec": 0,
00:16:55.365 "rw_mbytes_per_sec": 0,
00:16:55.365 "r_mbytes_per_sec": 0,
00:16:55.365 "w_mbytes_per_sec": 0
00:16:55.365 },
00:16:55.365 "claimed": false,
00:16:55.365 "zoned": false,
00:16:55.365 "supported_io_types": {
00:16:55.365 "read": true,
00:16:55.365 "write": true,
00:16:55.365 "unmap": true,
00:16:55.365 "flush": true,
00:16:55.365 "reset": true,
00:16:55.365 "nvme_admin": false,
00:16:55.365 "nvme_io": false,
00:16:55.365 "nvme_io_md": false,
00:16:55.365 "write_zeroes": true,
00:16:55.365 "zcopy": true,
00:16:55.365 "get_zone_info": false,
00:16:55.365 "zone_management": false,
00:16:55.365 "zone_append": false,
00:16:55.365 "compare": false,
00:16:55.365 "compare_and_write": false,
00:16:55.365 "abort": true,
00:16:55.365 "seek_hole": false,
00:16:55.365 "seek_data": false,
00:16:55.365 "copy": true,
00:16:55.365 "nvme_iov_md": false
00:16:55.365 },
00:16:55.365 "memory_domains": [
00:16:55.365 {
00:16:55.365 "dma_device_id": "system",
00:16:55.365 "dma_device_type": 1
00:16:55.365 },
00:16:55.365 {
00:16:55.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:55.365 "dma_device_type": 2
00:16:55.365 }
00:16:55.365 ],
00:16:55.365 "driver_specific": {}
00:16:55.365 }
00:16:55.365 ]
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:55.365 BaseBdev3
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:55.365 [
00:16:55.365 {
00:16:55.365 "name": "BaseBdev3",
00:16:55.365 "aliases": [
00:16:55.365 "12fc72b3-9318-45fe-9392-99e0e3aeebf7"
00:16:55.365 ],
00:16:55.365 "product_name": "Malloc disk",
00:16:55.365 "block_size": 512,
00:16:55.365 "num_blocks": 65536,
00:16:55.365 "uuid": "12fc72b3-9318-45fe-9392-99e0e3aeebf7",
00:16:55.365 "assigned_rate_limits": {
00:16:55.365 "rw_ios_per_sec": 0,
00:16:55.365 "rw_mbytes_per_sec": 0,
00:16:55.365 "r_mbytes_per_sec": 0,
00:16:55.365 "w_mbytes_per_sec": 0
00:16:55.365 },
00:16:55.365 "claimed": false,
00:16:55.365 "zoned": false,
00:16:55.365 "supported_io_types": {
00:16:55.365 "read": true,
00:16:55.365 "write": true,
00:16:55.365 "unmap": true,
00:16:55.365 "flush": true,
00:16:55.365 "reset": true,
00:16:55.365 "nvme_admin": false,
00:16:55.365 "nvme_io": false,
00:16:55.365 "nvme_io_md": false,
00:16:55.365 "write_zeroes": true,
00:16:55.365 "zcopy": true,
00:16:55.365 "get_zone_info": false,
00:16:55.365 "zone_management": false,
00:16:55.365 "zone_append": false,
00:16:55.365 "compare": false,
00:16:55.365 "compare_and_write": false,
00:16:55.365 "abort": true,
00:16:55.365 "seek_hole": false,
00:16:55.365 "seek_data": false,
00:16:55.365 "copy": true,
00:16:55.365 "nvme_iov_md": false
00:16:55.365 },
00:16:55.365 "memory_domains": [
00:16:55.365 {
00:16:55.365 "dma_device_id": "system",
00:16:55.365 "dma_device_type": 1
00:16:55.365 },
00:16:55.365 {
00:16:55.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:55.365 "dma_device_type": 2
00:16:55.365 }
00:16:55.365 ],
00:16:55.365 "driver_specific": {}
00:16:55.365 }
00:16:55.365 ]
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:55.365 BaseBdev4
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:55.365 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:16:55.366 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:55.366 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:55.366 [
00:16:55.366 {
00:16:55.366 "name": "BaseBdev4",
00:16:55.366 "aliases": [
00:16:55.366 "39821b4b-3b3d-4fdc-bcd7-3414992728d2"
00:16:55.366 ],
00:16:55.366 "product_name": "Malloc disk",
00:16:55.366 "block_size": 512,
00:16:55.366 "num_blocks": 65536,
00:16:55.366 "uuid": "39821b4b-3b3d-4fdc-bcd7-3414992728d2",
00:16:55.366 "assigned_rate_limits": {
00:16:55.366 "rw_ios_per_sec": 0,
00:16:55.366 "rw_mbytes_per_sec": 0,
00:16:55.366 "r_mbytes_per_sec": 0,
00:16:55.366 "w_mbytes_per_sec": 0
00:16:55.366 },
00:16:55.366 "claimed": false,
00:16:55.366 "zoned": false,
00:16:55.366 "supported_io_types": {
00:16:55.366 "read": true,
00:16:55.366 "write": true,
00:16:55.366 "unmap": true,
00:16:55.366 "flush": true,
00:16:55.366 "reset": true,
00:16:55.366 "nvme_admin": false,
00:16:55.366 "nvme_io": false,
00:16:55.366 "nvme_io_md": false,
00:16:55.366 "write_zeroes": true,
00:16:55.366 "zcopy": true,
00:16:55.366 "get_zone_info": false,
00:16:55.366 "zone_management": false,
00:16:55.366 "zone_append": false,
00:16:55.366 "compare": false,
00:16:55.366 "compare_and_write": false,
00:16:55.366 "abort": true,
00:16:55.366 "seek_hole": false,
00:16:55.366 "seek_data": false,
00:16:55.366 "copy": true,
00:16:55.366 "nvme_iov_md": false
00:16:55.366 },
00:16:55.366 "memory_domains": [
00:16:55.366 {
00:16:55.366 "dma_device_id": "system",
00:16:55.366 "dma_device_type": 1
00:16:55.366 },
00:16:55.366 {
00:16:55.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:55.366 "dma_device_type": 2
00:16:55.366 }
00:16:55.366 ],
00:16:55.366 "driver_specific": {}
00:16:55.366 }
00:16:55.366 ]
00:16:55.366 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:55.366 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:16:55.366 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:16:55.366 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:16:55.366 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:16:55.366 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:55.366 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:55.366 [2024-11-20 13:36:54.780113] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
[2024-11-20 13:36:54.780308] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
[2024-11-20 13:36:54.780414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
[2024-11-20 13:36:54.782828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
[2024-11-20 13:36:54.783003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:16:55.366 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:55.366 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:16:55.366 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:55.366 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:55.366 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:55.366 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:55.366 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:55.366 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:55.366 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:55.366 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:55.366 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:55.366 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:55.366 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:55.366 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:55.366 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:55.366 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:55.366 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:55.366 "name": "Existed_Raid",
00:16:55.366 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:55.366 "strip_size_kb": 0,
00:16:55.366 "state": "configuring",
00:16:55.366 "raid_level": "raid1",
00:16:55.366 "superblock": false,
00:16:55.366 "num_base_bdevs": 4,
00:16:55.366 "num_base_bdevs_discovered": 3,
00:16:55.366 "num_base_bdevs_operational": 4,
00:16:55.366 "base_bdevs_list": [
00:16:55.366 {
00:16:55.366 "name": "BaseBdev1",
00:16:55.366 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:55.366 "is_configured": false,
00:16:55.366 "data_offset": 0,
00:16:55.366 "data_size": 0
00:16:55.366 },
00:16:55.366 {
00:16:55.366 "name": "BaseBdev2",
00:16:55.366 "uuid": "b086694b-7970-40ca-9e11-37ff2ab4ad20",
00:16:55.366 "is_configured": true,
00:16:55.366 "data_offset": 0,
00:16:55.366 "data_size": 65536
00:16:55.366 },
00:16:55.366 {
00:16:55.366 "name": "BaseBdev3",
00:16:55.366 "uuid": "12fc72b3-9318-45fe-9392-99e0e3aeebf7",
00:16:55.366 "is_configured": true,
00:16:55.366 "data_offset": 0,
00:16:55.366 "data_size": 65536
00:16:55.366 },
00:16:55.366 {
00:16:55.366 "name": "BaseBdev4",
00:16:55.366 "uuid": "39821b4b-3b3d-4fdc-bcd7-3414992728d2",
00:16:55.366 "is_configured": true,
00:16:55.366 "data_offset": 0,
00:16:55.366 "data_size": 65536
00:16:55.366 }
00:16:55.366 ]
00:16:55.366 }'
00:16:55.366 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:55.366 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:55.934 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:16:55.934 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:55.934 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:55.934 [2024-11-20 13:36:55.199547] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:16:55.934 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:55.934 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:16:55.934 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:55.934 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:55.934 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:55.934 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:55.934 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:55.934 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:55.934 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:55.934 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:55.934 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:55.934 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:55.934 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:55.934 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:55.934 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:55.934 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:55.934 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:55.934 "name": "Existed_Raid",
00:16:55.934 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:55.934 "strip_size_kb": 0,
00:16:55.935 "state": "configuring",
00:16:55.935 "raid_level": "raid1",
00:16:55.935 "superblock": false,
00:16:55.935 "num_base_bdevs": 4,
00:16:55.935 "num_base_bdevs_discovered": 2,
00:16:55.935 "num_base_bdevs_operational": 4,
00:16:55.935 "base_bdevs_list": [
00:16:55.935 {
00:16:55.935 "name": "BaseBdev1",
00:16:55.935 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:55.935 "is_configured": false,
00:16:55.935 "data_offset": 0,
00:16:55.935 "data_size": 0
00:16:55.935 },
00:16:55.935 {
00:16:55.935 "name": null,
00:16:55.935 "uuid": "b086694b-7970-40ca-9e11-37ff2ab4ad20",
00:16:55.935 "is_configured": false,
00:16:55.935 "data_offset": 0,
00:16:55.935 "data_size": 65536
00:16:55.935 },
00:16:55.935 {
00:16:55.935 "name": "BaseBdev3",
00:16:55.935 "uuid": "12fc72b3-9318-45fe-9392-99e0e3aeebf7",
00:16:55.935 "is_configured": true,
00:16:55.935 "data_offset": 0,
00:16:55.935 "data_size": 65536
00:16:55.935 },
00:16:55.935 {
00:16:55.935 "name": "BaseBdev4",
00:16:55.935 "uuid": "39821b4b-3b3d-4fdc-bcd7-3414992728d2",
00:16:55.935 "is_configured": true,
00:16:55.935 "data_offset": 0,
00:16:55.935 "data_size": 65536
00:16:55.935 }
00:16:55.935 ]
00:16:55.935 }'
00:16:55.935 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:55.935 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:56.502 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:56.502 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:16:56.502 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:56.502 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:56.502 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:56.502 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:16:56.502 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:16:56.502 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:56.502 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:56.502 [2024-11-20 13:36:55.761912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:56.502 BaseBdev1
00:16:56.502 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:56.502 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:16:56.502 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:16:56.502 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:16:56.502 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:16:56.502 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:16:56.502 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:16:56.502 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:16:56.502 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:56.502 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:56.502 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:56.502 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:16:56.502 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:56.502 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:56.502 [
00:16:56.502 {
00:16:56.502 "name": "BaseBdev1",
00:16:56.502 "aliases": [
00:16:56.502 "98c22357-f42f-4392-a3e2-d83c6734e817"
00:16:56.502 ],
00:16:56.502 "product_name": "Malloc disk",
00:16:56.502 "block_size": 512,
00:16:56.502 "num_blocks": 65536,
00:16:56.502 "uuid": "98c22357-f42f-4392-a3e2-d83c6734e817",
00:16:56.502 "assigned_rate_limits": {
00:16:56.502 "rw_ios_per_sec": 0,
00:16:56.502 "rw_mbytes_per_sec": 0,
00:16:56.502 "r_mbytes_per_sec": 0,
00:16:56.502 "w_mbytes_per_sec": 0
00:16:56.502 },
00:16:56.502 "claimed": true,
00:16:56.502 "claim_type": "exclusive_write",
00:16:56.502 "zoned": false,
00:16:56.502 "supported_io_types": {
00:16:56.502 "read": true,
00:16:56.502 "write": true,
00:16:56.502 "unmap": true,
00:16:56.502 "flush": true,
00:16:56.502 "reset": true,
00:16:56.502 "nvme_admin": false,
00:16:56.502 "nvme_io": false,
00:16:56.502 "nvme_io_md": false,
00:16:56.502 "write_zeroes": true,
00:16:56.502 "zcopy": true,
00:16:56.502 "get_zone_info": false,
00:16:56.502 "zone_management": false,
00:16:56.502 "zone_append": false,
00:16:56.502 "compare": false,
00:16:56.502 "compare_and_write": false,
00:16:56.503 "abort": true,
00:16:56.503 "seek_hole": false,
00:16:56.503 "seek_data": false,
00:16:56.503 "copy": true,
00:16:56.503 "nvme_iov_md": false
00:16:56.503 },
00:16:56.503 "memory_domains": [
00:16:56.503 {
00:16:56.503 "dma_device_id": "system",
00:16:56.503 "dma_device_type": 1
00:16:56.503 },
00:16:56.503 {
00:16:56.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:56.503 "dma_device_type": 2
00:16:56.503 }
00:16:56.503 ],
00:16:56.503 "driver_specific": {}
00:16:56.503 }
00:16:56.503 ]
00:16:56.503 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:56.503 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:16:56.503 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:16:56.503 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:56.503 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:56.503 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:56.503 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:56.503 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:56.503 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:56.503 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:56.503 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:56.503 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:56.503 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:56.503 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:56.503 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:56.503 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:56.503 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:56.503 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:56.503 "name": "Existed_Raid",
00:16:56.503 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:56.503 "strip_size_kb": 0,
00:16:56.503 "state": "configuring",
00:16:56.503 "raid_level": "raid1",
00:16:56.503 "superblock": false,
00:16:56.503 "num_base_bdevs": 4,
00:16:56.503 "num_base_bdevs_discovered": 3,
00:16:56.503 "num_base_bdevs_operational": 4,
00:16:56.503 "base_bdevs_list": [
00:16:56.503 {
00:16:56.503 "name": "BaseBdev1",
00:16:56.503 "uuid": "98c22357-f42f-4392-a3e2-d83c6734e817",
00:16:56.503 "is_configured": true,
00:16:56.503 "data_offset": 0,
00:16:56.503 "data_size": 65536
00:16:56.503 },
00:16:56.503 {
00:16:56.503 "name": null,
00:16:56.503 "uuid": "b086694b-7970-40ca-9e11-37ff2ab4ad20",
00:16:56.503 "is_configured": false,
00:16:56.503 "data_offset": 0,
00:16:56.503 "data_size": 65536
00:16:56.503 },
00:16:56.503 {
00:16:56.503 "name": "BaseBdev3",
00:16:56.503 "uuid": "12fc72b3-9318-45fe-9392-99e0e3aeebf7",
00:16:56.503 "is_configured": true,
00:16:56.503 "data_offset": 0,
00:16:56.503 "data_size": 65536
00:16:56.503 },
00:16:56.503 {
00:16:56.503 "name": "BaseBdev4",
00:16:56.503 "uuid": "39821b4b-3b3d-4fdc-bcd7-3414992728d2",
00:16:56.503 "is_configured": true,
00:16:56.503 "data_offset": 0,
00:16:56.503 "data_size": 65536
00:16:56.503 }
00:16:56.503 ]
00:16:56.503 }'
00:16:56.503 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:56.503 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:56.761 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:16:56.761 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:56.761 13:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:56.761 13:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:56.761 13:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:56.761 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:16:56.761 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:16:56.761 13:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:56.761 13:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:56.761 [2024-11-20 13:36:56.233348] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:16:56.761 13:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:56.761 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:16:56.761 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:56.761 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:56.761 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:56.761 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:56.761 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:56.761 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:56.761 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:56.761 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:56.761 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:56.761 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:56.761 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:56.761 13:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:56.761 13:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:57.052 13:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:57.052 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:57.052 "name": "Existed_Raid",
00:16:57.052 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:57.052 "strip_size_kb": 0,
00:16:57.052 "state": "configuring",
00:16:57.052 "raid_level": "raid1",
00:16:57.052 "superblock": false,
00:16:57.052 "num_base_bdevs": 4,
00:16:57.052 "num_base_bdevs_discovered": 2,
00:16:57.052 "num_base_bdevs_operational": 4,
00:16:57.052 "base_bdevs_list": [
00:16:57.052 {
00:16:57.052 "name": "BaseBdev1",
00:16:57.052 "uuid": "98c22357-f42f-4392-a3e2-d83c6734e817",
00:16:57.052 "is_configured": true,
00:16:57.052 "data_offset": 0,
00:16:57.052 "data_size": 65536
00:16:57.052 },
00:16:57.052 {
00:16:57.052 "name": null,
00:16:57.052 "uuid": "b086694b-7970-40ca-9e11-37ff2ab4ad20",
00:16:57.052 "is_configured": false,
00:16:57.052 "data_offset": 0,
00:16:57.052 "data_size": 65536
00:16:57.052 },
00:16:57.052 {
00:16:57.052 "name": null,
00:16:57.052 "uuid": "12fc72b3-9318-45fe-9392-99e0e3aeebf7",
00:16:57.052 "is_configured": false,
00:16:57.052 "data_offset": 0,
00:16:57.052 "data_size": 65536
00:16:57.052 },
00:16:57.052 {
00:16:57.052 "name": "BaseBdev4",
00:16:57.052 "uuid": "39821b4b-3b3d-4fdc-bcd7-3414992728d2",
00:16:57.052 "is_configured": true,
00:16:57.052 "data_offset": 0,
00:16:57.052 "data_size": 65536
00:16:57.052 }
00:16:57.052 ]
00:16:57.052 }'
00:16:57.052 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:57.052 13:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:57.310 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:57.310 13:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:57.310 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:16:57.310 13:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:57.310 13:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:57.310 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:16:57.310 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:16:57.310 13:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:57.310 13:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:57.310 [2024-11-20 13:36:56.732647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:16:57.310 13:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:57.310 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:16:57.310 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:57.310 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:57.310 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:57.310 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:57.310 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:57.310 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:57.310 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:57.310 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:57.310 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:57.310 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:57.310 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:57.310 13:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:57.310 13:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:57.310 13:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:57.569 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:57.569 "name": "Existed_Raid",
00:16:57.569 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:57.569 "strip_size_kb": 0,
00:16:57.569 "state": "configuring",
00:16:57.569 "raid_level": "raid1",
00:16:57.569 "superblock": false,
00:16:57.569 "num_base_bdevs": 4,
00:16:57.569 "num_base_bdevs_discovered": 3,
00:16:57.569 "num_base_bdevs_operational": 4,
00:16:57.569 "base_bdevs_list": [
00:16:57.569 {
00:16:57.569 "name": "BaseBdev1",
00:16:57.569 "uuid": "98c22357-f42f-4392-a3e2-d83c6734e817",
00:16:57.569 "is_configured": true,
00:16:57.569 "data_offset": 0,
00:16:57.569 "data_size": 65536
00:16:57.569 },
00:16:57.569 {
00:16:57.569 "name": null,
00:16:57.569 "uuid": "b086694b-7970-40ca-9e11-37ff2ab4ad20",
00:16:57.569 "is_configured": false,
00:16:57.569 "data_offset": 0,
00:16:57.569 "data_size": 65536
00:16:57.569 },
00:16:57.569 {
00:16:57.569 "name": "BaseBdev3", 00:16:57.569 "uuid": "12fc72b3-9318-45fe-9392-99e0e3aeebf7", 00:16:57.569 "is_configured": true, 00:16:57.569 "data_offset": 0, 00:16:57.569 "data_size": 65536 00:16:57.569 }, 00:16:57.569 { 00:16:57.569 "name": "BaseBdev4", 00:16:57.569 "uuid": "39821b4b-3b3d-4fdc-bcd7-3414992728d2", 00:16:57.569 "is_configured": true, 00:16:57.569 "data_offset": 0, 00:16:57.569 "data_size": 65536 00:16:57.569 } 00:16:57.569 ] 00:16:57.569 }' 00:16:57.569 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.569 13:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.826 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:57.826 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.826 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.827 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.827 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.827 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:57.827 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:57.827 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.827 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.827 [2024-11-20 13:36:57.180121] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:57.827 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.827 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:57.827 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:57.827 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:57.827 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:57.827 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:57.827 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:57.827 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.827 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.827 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.827 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.827 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.827 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.827 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.827 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.085 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.085 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.085 "name": "Existed_Raid", 00:16:58.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.085 "strip_size_kb": 0, 00:16:58.085 "state": "configuring", 00:16:58.085 "raid_level": "raid1", 00:16:58.085 "superblock": false, 00:16:58.085 
"num_base_bdevs": 4, 00:16:58.085 "num_base_bdevs_discovered": 2, 00:16:58.085 "num_base_bdevs_operational": 4, 00:16:58.085 "base_bdevs_list": [ 00:16:58.085 { 00:16:58.085 "name": null, 00:16:58.085 "uuid": "98c22357-f42f-4392-a3e2-d83c6734e817", 00:16:58.085 "is_configured": false, 00:16:58.085 "data_offset": 0, 00:16:58.085 "data_size": 65536 00:16:58.085 }, 00:16:58.085 { 00:16:58.085 "name": null, 00:16:58.085 "uuid": "b086694b-7970-40ca-9e11-37ff2ab4ad20", 00:16:58.085 "is_configured": false, 00:16:58.085 "data_offset": 0, 00:16:58.085 "data_size": 65536 00:16:58.085 }, 00:16:58.085 { 00:16:58.085 "name": "BaseBdev3", 00:16:58.085 "uuid": "12fc72b3-9318-45fe-9392-99e0e3aeebf7", 00:16:58.085 "is_configured": true, 00:16:58.085 "data_offset": 0, 00:16:58.085 "data_size": 65536 00:16:58.085 }, 00:16:58.085 { 00:16:58.085 "name": "BaseBdev4", 00:16:58.085 "uuid": "39821b4b-3b3d-4fdc-bcd7-3414992728d2", 00:16:58.085 "is_configured": true, 00:16:58.085 "data_offset": 0, 00:16:58.085 "data_size": 65536 00:16:58.085 } 00:16:58.085 ] 00:16:58.085 }' 00:16:58.085 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.085 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.344 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:58.344 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.344 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.344 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.344 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.344 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:58.344 13:36:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:58.344 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.344 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.344 [2024-11-20 13:36:57.764395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:58.344 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.344 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:58.344 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:58.344 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:58.344 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:58.344 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:58.344 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:58.344 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.344 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.344 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.344 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.344 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.344 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.344 13:36:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.344 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.344 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.344 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.344 "name": "Existed_Raid", 00:16:58.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.344 "strip_size_kb": 0, 00:16:58.344 "state": "configuring", 00:16:58.344 "raid_level": "raid1", 00:16:58.344 "superblock": false, 00:16:58.344 "num_base_bdevs": 4, 00:16:58.344 "num_base_bdevs_discovered": 3, 00:16:58.344 "num_base_bdevs_operational": 4, 00:16:58.344 "base_bdevs_list": [ 00:16:58.344 { 00:16:58.344 "name": null, 00:16:58.344 "uuid": "98c22357-f42f-4392-a3e2-d83c6734e817", 00:16:58.344 "is_configured": false, 00:16:58.344 "data_offset": 0, 00:16:58.344 "data_size": 65536 00:16:58.344 }, 00:16:58.344 { 00:16:58.344 "name": "BaseBdev2", 00:16:58.344 "uuid": "b086694b-7970-40ca-9e11-37ff2ab4ad20", 00:16:58.344 "is_configured": true, 00:16:58.344 "data_offset": 0, 00:16:58.344 "data_size": 65536 00:16:58.344 }, 00:16:58.344 { 00:16:58.344 "name": "BaseBdev3", 00:16:58.344 "uuid": "12fc72b3-9318-45fe-9392-99e0e3aeebf7", 00:16:58.344 "is_configured": true, 00:16:58.344 "data_offset": 0, 00:16:58.344 "data_size": 65536 00:16:58.344 }, 00:16:58.344 { 00:16:58.344 "name": "BaseBdev4", 00:16:58.344 "uuid": "39821b4b-3b3d-4fdc-bcd7-3414992728d2", 00:16:58.344 "is_configured": true, 00:16:58.344 "data_offset": 0, 00:16:58.344 "data_size": 65536 00:16:58.344 } 00:16:58.344 ] 00:16:58.344 }' 00:16:58.344 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.344 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.909 13:36:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.909 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.909 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.909 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:58.909 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.909 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:58.909 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.909 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.909 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.909 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:58.909 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.909 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 98c22357-f42f-4392-a3e2-d83c6734e817 00:16:58.909 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.909 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.909 [2024-11-20 13:36:58.355779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:58.909 [2024-11-20 13:36:58.355830] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:58.909 [2024-11-20 13:36:58.355842] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:58.909 [2024-11-20 13:36:58.356155] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:58.909 [2024-11-20 13:36:58.356321] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:58.909 [2024-11-20 13:36:58.356332] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:58.909 [2024-11-20 13:36:58.356596] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.909 NewBaseBdev 00:16:58.909 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.909 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:58.909 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:58.909 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:58.909 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:58.909 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:58.909 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:58.909 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:58.909 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.909 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.909 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.909 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:58.909 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.909 13:36:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.909 [ 00:16:58.909 { 00:16:58.909 "name": "NewBaseBdev", 00:16:58.909 "aliases": [ 00:16:58.909 "98c22357-f42f-4392-a3e2-d83c6734e817" 00:16:58.909 ], 00:16:58.909 "product_name": "Malloc disk", 00:16:58.909 "block_size": 512, 00:16:58.909 "num_blocks": 65536, 00:16:58.909 "uuid": "98c22357-f42f-4392-a3e2-d83c6734e817", 00:16:58.909 "assigned_rate_limits": { 00:16:58.909 "rw_ios_per_sec": 0, 00:16:58.909 "rw_mbytes_per_sec": 0, 00:16:58.909 "r_mbytes_per_sec": 0, 00:16:58.909 "w_mbytes_per_sec": 0 00:16:58.909 }, 00:16:58.909 "claimed": true, 00:16:58.909 "claim_type": "exclusive_write", 00:16:58.909 "zoned": false, 00:16:58.909 "supported_io_types": { 00:16:58.909 "read": true, 00:16:58.909 "write": true, 00:16:58.909 "unmap": true, 00:16:58.909 "flush": true, 00:16:58.909 "reset": true, 00:16:58.909 "nvme_admin": false, 00:16:58.909 "nvme_io": false, 00:16:58.910 "nvme_io_md": false, 00:16:59.168 "write_zeroes": true, 00:16:59.168 "zcopy": true, 00:16:59.168 "get_zone_info": false, 00:16:59.168 "zone_management": false, 00:16:59.168 "zone_append": false, 00:16:59.168 "compare": false, 00:16:59.168 "compare_and_write": false, 00:16:59.168 "abort": true, 00:16:59.168 "seek_hole": false, 00:16:59.168 "seek_data": false, 00:16:59.168 "copy": true, 00:16:59.168 "nvme_iov_md": false 00:16:59.168 }, 00:16:59.168 "memory_domains": [ 00:16:59.168 { 00:16:59.168 "dma_device_id": "system", 00:16:59.168 "dma_device_type": 1 00:16:59.168 }, 00:16:59.168 { 00:16:59.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.168 "dma_device_type": 2 00:16:59.168 } 00:16:59.168 ], 00:16:59.168 "driver_specific": {} 00:16:59.168 } 00:16:59.168 ] 00:16:59.168 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.168 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:59.168 13:36:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:16:59.168 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:59.168 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.168 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:59.168 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:59.168 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:59.168 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.168 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.168 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.168 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.168 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.168 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:59.168 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.168 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.168 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.168 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.168 "name": "Existed_Raid", 00:16:59.168 "uuid": "6b7236de-5532-4568-9f25-b24d71391a7c", 00:16:59.168 "strip_size_kb": 0, 00:16:59.168 "state": "online", 00:16:59.168 "raid_level": "raid1", 
00:16:59.168 "superblock": false, 00:16:59.168 "num_base_bdevs": 4, 00:16:59.168 "num_base_bdevs_discovered": 4, 00:16:59.168 "num_base_bdevs_operational": 4, 00:16:59.168 "base_bdevs_list": [ 00:16:59.168 { 00:16:59.168 "name": "NewBaseBdev", 00:16:59.168 "uuid": "98c22357-f42f-4392-a3e2-d83c6734e817", 00:16:59.168 "is_configured": true, 00:16:59.168 "data_offset": 0, 00:16:59.168 "data_size": 65536 00:16:59.168 }, 00:16:59.168 { 00:16:59.168 "name": "BaseBdev2", 00:16:59.168 "uuid": "b086694b-7970-40ca-9e11-37ff2ab4ad20", 00:16:59.168 "is_configured": true, 00:16:59.168 "data_offset": 0, 00:16:59.168 "data_size": 65536 00:16:59.168 }, 00:16:59.168 { 00:16:59.168 "name": "BaseBdev3", 00:16:59.168 "uuid": "12fc72b3-9318-45fe-9392-99e0e3aeebf7", 00:16:59.168 "is_configured": true, 00:16:59.168 "data_offset": 0, 00:16:59.168 "data_size": 65536 00:16:59.168 }, 00:16:59.168 { 00:16:59.168 "name": "BaseBdev4", 00:16:59.168 "uuid": "39821b4b-3b3d-4fdc-bcd7-3414992728d2", 00:16:59.168 "is_configured": true, 00:16:59.168 "data_offset": 0, 00:16:59.168 "data_size": 65536 00:16:59.168 } 00:16:59.168 ] 00:16:59.168 }' 00:16:59.168 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.169 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.427 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:59.427 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:59.427 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:59.427 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:59.427 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:59.427 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev 
cmp_base_bdev 00:16:59.427 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:59.427 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.427 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.427 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:59.427 [2024-11-20 13:36:58.835543] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:59.427 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.427 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:59.427 "name": "Existed_Raid", 00:16:59.427 "aliases": [ 00:16:59.427 "6b7236de-5532-4568-9f25-b24d71391a7c" 00:16:59.427 ], 00:16:59.427 "product_name": "Raid Volume", 00:16:59.427 "block_size": 512, 00:16:59.427 "num_blocks": 65536, 00:16:59.427 "uuid": "6b7236de-5532-4568-9f25-b24d71391a7c", 00:16:59.427 "assigned_rate_limits": { 00:16:59.427 "rw_ios_per_sec": 0, 00:16:59.427 "rw_mbytes_per_sec": 0, 00:16:59.427 "r_mbytes_per_sec": 0, 00:16:59.427 "w_mbytes_per_sec": 0 00:16:59.427 }, 00:16:59.427 "claimed": false, 00:16:59.427 "zoned": false, 00:16:59.427 "supported_io_types": { 00:16:59.427 "read": true, 00:16:59.427 "write": true, 00:16:59.427 "unmap": false, 00:16:59.427 "flush": false, 00:16:59.427 "reset": true, 00:16:59.427 "nvme_admin": false, 00:16:59.427 "nvme_io": false, 00:16:59.427 "nvme_io_md": false, 00:16:59.427 "write_zeroes": true, 00:16:59.427 "zcopy": false, 00:16:59.427 "get_zone_info": false, 00:16:59.427 "zone_management": false, 00:16:59.427 "zone_append": false, 00:16:59.427 "compare": false, 00:16:59.427 "compare_and_write": false, 00:16:59.427 "abort": false, 00:16:59.427 "seek_hole": false, 00:16:59.427 "seek_data": false, 00:16:59.427 "copy": false, 00:16:59.427 
"nvme_iov_md": false 00:16:59.427 }, 00:16:59.427 "memory_domains": [ 00:16:59.427 { 00:16:59.427 "dma_device_id": "system", 00:16:59.427 "dma_device_type": 1 00:16:59.427 }, 00:16:59.427 { 00:16:59.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.427 "dma_device_type": 2 00:16:59.427 }, 00:16:59.427 { 00:16:59.427 "dma_device_id": "system", 00:16:59.427 "dma_device_type": 1 00:16:59.427 }, 00:16:59.427 { 00:16:59.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.427 "dma_device_type": 2 00:16:59.427 }, 00:16:59.427 { 00:16:59.427 "dma_device_id": "system", 00:16:59.427 "dma_device_type": 1 00:16:59.427 }, 00:16:59.427 { 00:16:59.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.427 "dma_device_type": 2 00:16:59.427 }, 00:16:59.427 { 00:16:59.427 "dma_device_id": "system", 00:16:59.427 "dma_device_type": 1 00:16:59.427 }, 00:16:59.427 { 00:16:59.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.427 "dma_device_type": 2 00:16:59.427 } 00:16:59.427 ], 00:16:59.427 "driver_specific": { 00:16:59.427 "raid": { 00:16:59.427 "uuid": "6b7236de-5532-4568-9f25-b24d71391a7c", 00:16:59.427 "strip_size_kb": 0, 00:16:59.427 "state": "online", 00:16:59.427 "raid_level": "raid1", 00:16:59.427 "superblock": false, 00:16:59.427 "num_base_bdevs": 4, 00:16:59.427 "num_base_bdevs_discovered": 4, 00:16:59.427 "num_base_bdevs_operational": 4, 00:16:59.427 "base_bdevs_list": [ 00:16:59.427 { 00:16:59.427 "name": "NewBaseBdev", 00:16:59.427 "uuid": "98c22357-f42f-4392-a3e2-d83c6734e817", 00:16:59.427 "is_configured": true, 00:16:59.427 "data_offset": 0, 00:16:59.427 "data_size": 65536 00:16:59.427 }, 00:16:59.427 { 00:16:59.427 "name": "BaseBdev2", 00:16:59.427 "uuid": "b086694b-7970-40ca-9e11-37ff2ab4ad20", 00:16:59.427 "is_configured": true, 00:16:59.427 "data_offset": 0, 00:16:59.427 "data_size": 65536 00:16:59.427 }, 00:16:59.427 { 00:16:59.427 "name": "BaseBdev3", 00:16:59.427 "uuid": "12fc72b3-9318-45fe-9392-99e0e3aeebf7", 00:16:59.427 "is_configured": true, 
00:16:59.427 "data_offset": 0, 00:16:59.427 "data_size": 65536 00:16:59.427 }, 00:16:59.427 { 00:16:59.427 "name": "BaseBdev4", 00:16:59.427 "uuid": "39821b4b-3b3d-4fdc-bcd7-3414992728d2", 00:16:59.427 "is_configured": true, 00:16:59.427 "data_offset": 0, 00:16:59.427 "data_size": 65536 00:16:59.427 } 00:16:59.427 ] 00:16:59.427 } 00:16:59.427 } 00:16:59.427 }' 00:16:59.427 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:59.427 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:59.427 BaseBdev2 00:16:59.428 BaseBdev3 00:16:59.428 BaseBdev4' 00:16:59.428 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:59.686 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:59.686 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:59.686 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:59.686 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:59.686 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.686 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.686 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.686 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:59.686 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:59.686 13:36:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:59.686 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:59.686 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:59.686 13:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.686 13:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.686 13:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.686 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:59.686 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:59.687 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:59.687 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:59.687 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:59.687 13:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.687 13:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.687 13:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.687 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:59.687 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:59.687 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:59.687 13:36:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:59.687 13:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.687 13:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.687 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:59.687 13:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.687 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:59.687 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:59.687 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:59.687 13:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.687 13:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.687 [2024-11-20 13:36:59.134757] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:59.687 [2024-11-20 13:36:59.134899] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:59.687 [2024-11-20 13:36:59.135009] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:59.687 [2024-11-20 13:36:59.135318] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:59.687 [2024-11-20 13:36:59.135336] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:59.687 13:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.687 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 72929 
00:16:59.687 13:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 72929 ']' 00:16:59.687 13:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 72929 00:16:59.687 13:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:16:59.687 13:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:59.687 13:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72929 00:16:59.945 13:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:59.945 13:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:59.945 13:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72929' 00:16:59.945 killing process with pid 72929 00:16:59.945 13:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 72929 00:16:59.945 [2024-11-20 13:36:59.181963] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:59.945 13:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 72929 00:17:00.204 [2024-11-20 13:36:59.592960] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:01.579 ************************************ 00:17:01.579 END TEST raid_state_function_test 00:17:01.579 ************************************ 00:17:01.579 13:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:17:01.579 00:17:01.579 real 0m11.444s 00:17:01.579 user 0m18.160s 00:17:01.579 sys 0m2.186s 00:17:01.579 13:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:01.579 13:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.579 13:37:00 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:17:01.579 13:37:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:01.579 13:37:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:01.579 13:37:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:01.579 ************************************ 00:17:01.579 START TEST raid_state_function_test_sb 00:17:01.579 ************************************ 00:17:01.579 13:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:17:01.579 13:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:01.579 13:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:01.579 13:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:01.579 13:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:01.579 13:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:01.579 13:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:01.579 13:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:01.579 13:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:01.579 13:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:01.579 13:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:01.579 13:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:01.579 13:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:01.579 13:37:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:01.579 13:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:01.579 13:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:01.579 13:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:01.579 13:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:01.579 13:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:01.579 13:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:01.579 13:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:01.579 13:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:01.579 13:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:01.579 13:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:01.579 13:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:01.579 13:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:01.579 13:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:01.579 13:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:01.579 13:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:01.579 13:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73595 00:17:01.579 Process raid pid: 73595 00:17:01.579 13:37:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73595' 00:17:01.579 13:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:01.579 13:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73595 00:17:01.579 13:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73595 ']' 00:17:01.579 13:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.579 13:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:01.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.579 13:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.580 13:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:01.580 13:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.580 [2024-11-20 13:37:00.956588] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:17:01.580 [2024-11-20 13:37:00.956729] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:01.838 [2024-11-20 13:37:01.139346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.838 [2024-11-20 13:37:01.260791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.096 [2024-11-20 13:37:01.482079] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:02.096 [2024-11-20 13:37:01.482130] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:02.355 13:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:02.355 13:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:02.355 13:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:02.355 13:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.355 13:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.355 [2024-11-20 13:37:01.811712] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:02.355 [2024-11-20 13:37:01.811771] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:02.355 [2024-11-20 13:37:01.811784] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:02.355 [2024-11-20 13:37:01.811797] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:02.355 [2024-11-20 13:37:01.811806] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:17:02.355 [2024-11-20 13:37:01.811818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:02.355 [2024-11-20 13:37:01.811826] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:02.355 [2024-11-20 13:37:01.811839] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:02.355 13:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.355 13:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:02.355 13:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:02.355 13:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:02.355 13:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:02.355 13:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:02.355 13:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:02.355 13:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.355 13:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.355 13:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.355 13:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.355 13:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.355 13:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:02.355 13:37:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.355 13:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.614 13:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.614 13:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.614 "name": "Existed_Raid", 00:17:02.614 "uuid": "ce4fa494-f1d4-498c-b392-4133afd12ad7", 00:17:02.614 "strip_size_kb": 0, 00:17:02.614 "state": "configuring", 00:17:02.614 "raid_level": "raid1", 00:17:02.614 "superblock": true, 00:17:02.614 "num_base_bdevs": 4, 00:17:02.614 "num_base_bdevs_discovered": 0, 00:17:02.614 "num_base_bdevs_operational": 4, 00:17:02.614 "base_bdevs_list": [ 00:17:02.614 { 00:17:02.614 "name": "BaseBdev1", 00:17:02.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.614 "is_configured": false, 00:17:02.614 "data_offset": 0, 00:17:02.614 "data_size": 0 00:17:02.614 }, 00:17:02.614 { 00:17:02.614 "name": "BaseBdev2", 00:17:02.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.614 "is_configured": false, 00:17:02.614 "data_offset": 0, 00:17:02.614 "data_size": 0 00:17:02.614 }, 00:17:02.614 { 00:17:02.614 "name": "BaseBdev3", 00:17:02.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.614 "is_configured": false, 00:17:02.614 "data_offset": 0, 00:17:02.614 "data_size": 0 00:17:02.614 }, 00:17:02.614 { 00:17:02.614 "name": "BaseBdev4", 00:17:02.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.614 "is_configured": false, 00:17:02.614 "data_offset": 0, 00:17:02.614 "data_size": 0 00:17:02.614 } 00:17:02.614 ] 00:17:02.614 }' 00:17:02.614 13:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.614 13:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.873 13:37:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:02.873 13:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.873 13:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.873 [2024-11-20 13:37:02.203151] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:02.873 [2024-11-20 13:37:02.203217] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:02.873 13:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.873 13:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:02.873 13:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.873 13:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.873 [2024-11-20 13:37:02.211134] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:02.873 [2024-11-20 13:37:02.211193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:02.873 [2024-11-20 13:37:02.211221] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:02.873 [2024-11-20 13:37:02.211235] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:02.873 [2024-11-20 13:37:02.211243] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:02.873 [2024-11-20 13:37:02.211256] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:02.873 [2024-11-20 13:37:02.211265] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:17:02.873 [2024-11-20 13:37:02.211277] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:02.873 13:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.873 13:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:02.873 13:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.873 13:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.873 [2024-11-20 13:37:02.258235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:02.873 BaseBdev1 00:17:02.873 13:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.873 13:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:02.873 13:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:02.873 13:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:02.873 13:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:02.873 13:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:02.873 13:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:02.873 13:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:02.873 13:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.873 13:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.873 13:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:02.873 13:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:02.873 13:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.873 13:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.873 [ 00:17:02.873 { 00:17:02.873 "name": "BaseBdev1", 00:17:02.873 "aliases": [ 00:17:02.873 "223402cc-f256-4d4d-9268-19bb55c0f05f" 00:17:02.873 ], 00:17:02.873 "product_name": "Malloc disk", 00:17:02.873 "block_size": 512, 00:17:02.873 "num_blocks": 65536, 00:17:02.873 "uuid": "223402cc-f256-4d4d-9268-19bb55c0f05f", 00:17:02.873 "assigned_rate_limits": { 00:17:02.873 "rw_ios_per_sec": 0, 00:17:02.873 "rw_mbytes_per_sec": 0, 00:17:02.873 "r_mbytes_per_sec": 0, 00:17:02.873 "w_mbytes_per_sec": 0 00:17:02.873 }, 00:17:02.873 "claimed": true, 00:17:02.873 "claim_type": "exclusive_write", 00:17:02.873 "zoned": false, 00:17:02.873 "supported_io_types": { 00:17:02.873 "read": true, 00:17:02.873 "write": true, 00:17:02.873 "unmap": true, 00:17:02.873 "flush": true, 00:17:02.873 "reset": true, 00:17:02.873 "nvme_admin": false, 00:17:02.873 "nvme_io": false, 00:17:02.873 "nvme_io_md": false, 00:17:02.873 "write_zeroes": true, 00:17:02.873 "zcopy": true, 00:17:02.873 "get_zone_info": false, 00:17:02.873 "zone_management": false, 00:17:02.873 "zone_append": false, 00:17:02.873 "compare": false, 00:17:02.873 "compare_and_write": false, 00:17:02.873 "abort": true, 00:17:02.873 "seek_hole": false, 00:17:02.873 "seek_data": false, 00:17:02.873 "copy": true, 00:17:02.873 "nvme_iov_md": false 00:17:02.873 }, 00:17:02.873 "memory_domains": [ 00:17:02.873 { 00:17:02.873 "dma_device_id": "system", 00:17:02.873 "dma_device_type": 1 00:17:02.873 }, 00:17:02.873 { 00:17:02.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.873 "dma_device_type": 2 00:17:02.873 } 00:17:02.873 ], 00:17:02.873 "driver_specific": {} 
00:17:02.873 } 00:17:02.873 ] 00:17:02.873 13:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.873 13:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:02.873 13:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:02.873 13:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:02.873 13:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:02.873 13:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:02.873 13:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:02.874 13:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:02.874 13:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.874 13:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.874 13:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.874 13:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.874 13:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.874 13:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.874 13:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.874 13:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:02.874 13:37:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.874 13:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.874 "name": "Existed_Raid", 00:17:02.874 "uuid": "3b4b9e47-56f3-4303-9a39-66fd63fc0313", 00:17:02.874 "strip_size_kb": 0, 00:17:02.874 "state": "configuring", 00:17:02.874 "raid_level": "raid1", 00:17:02.874 "superblock": true, 00:17:02.874 "num_base_bdevs": 4, 00:17:02.874 "num_base_bdevs_discovered": 1, 00:17:02.874 "num_base_bdevs_operational": 4, 00:17:02.874 "base_bdevs_list": [ 00:17:02.874 { 00:17:02.874 "name": "BaseBdev1", 00:17:02.874 "uuid": "223402cc-f256-4d4d-9268-19bb55c0f05f", 00:17:02.874 "is_configured": true, 00:17:02.874 "data_offset": 2048, 00:17:02.874 "data_size": 63488 00:17:02.874 }, 00:17:02.874 { 00:17:02.874 "name": "BaseBdev2", 00:17:02.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.874 "is_configured": false, 00:17:02.874 "data_offset": 0, 00:17:02.874 "data_size": 0 00:17:02.874 }, 00:17:02.874 { 00:17:02.874 "name": "BaseBdev3", 00:17:02.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.874 "is_configured": false, 00:17:02.874 "data_offset": 0, 00:17:02.874 "data_size": 0 00:17:02.874 }, 00:17:02.874 { 00:17:02.874 "name": "BaseBdev4", 00:17:02.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.874 "is_configured": false, 00:17:02.874 "data_offset": 0, 00:17:02.874 "data_size": 0 00:17:02.874 } 00:17:02.874 ] 00:17:02.874 }' 00:17:02.874 13:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.874 13:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.442 13:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:03.442 13:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.442 13:37:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:03.442 [2024-11-20 13:37:02.697716] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:03.442 [2024-11-20 13:37:02.697778] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:03.442 13:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.442 13:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:03.442 13:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.442 13:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.442 [2024-11-20 13:37:02.705758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:03.442 [2024-11-20 13:37:02.707884] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:03.442 [2024-11-20 13:37:02.707933] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:03.442 [2024-11-20 13:37:02.707945] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:03.442 [2024-11-20 13:37:02.707960] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:03.442 [2024-11-20 13:37:02.707968] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:03.442 [2024-11-20 13:37:02.707980] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:03.442 13:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.442 13:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:03.442 13:37:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:03.442 13:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:03.442 13:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:03.442 13:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:03.442 13:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:03.442 13:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:03.442 13:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:03.442 13:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.442 13:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.442 13:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.442 13:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.442 13:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.442 13:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.442 13:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.442 13:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:03.442 13:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.442 13:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.442 "name": 
"Existed_Raid", 00:17:03.442 "uuid": "d3519d12-8abb-42e8-bb1c-65ce35a5d55a", 00:17:03.442 "strip_size_kb": 0, 00:17:03.442 "state": "configuring", 00:17:03.442 "raid_level": "raid1", 00:17:03.442 "superblock": true, 00:17:03.442 "num_base_bdevs": 4, 00:17:03.442 "num_base_bdevs_discovered": 1, 00:17:03.442 "num_base_bdevs_operational": 4, 00:17:03.442 "base_bdevs_list": [ 00:17:03.442 { 00:17:03.442 "name": "BaseBdev1", 00:17:03.442 "uuid": "223402cc-f256-4d4d-9268-19bb55c0f05f", 00:17:03.442 "is_configured": true, 00:17:03.442 "data_offset": 2048, 00:17:03.442 "data_size": 63488 00:17:03.442 }, 00:17:03.442 { 00:17:03.442 "name": "BaseBdev2", 00:17:03.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.442 "is_configured": false, 00:17:03.442 "data_offset": 0, 00:17:03.442 "data_size": 0 00:17:03.442 }, 00:17:03.442 { 00:17:03.442 "name": "BaseBdev3", 00:17:03.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.442 "is_configured": false, 00:17:03.442 "data_offset": 0, 00:17:03.442 "data_size": 0 00:17:03.442 }, 00:17:03.442 { 00:17:03.442 "name": "BaseBdev4", 00:17:03.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.442 "is_configured": false, 00:17:03.442 "data_offset": 0, 00:17:03.442 "data_size": 0 00:17:03.442 } 00:17:03.442 ] 00:17:03.442 }' 00:17:03.442 13:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.442 13:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.703 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:03.703 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.703 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.703 [2024-11-20 13:37:03.124017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:03.703 
BaseBdev2 00:17:03.703 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.703 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:03.703 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:03.703 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:03.703 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:03.703 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:03.703 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:03.703 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:03.703 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.703 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.703 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.703 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:03.703 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.703 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.703 [ 00:17:03.703 { 00:17:03.703 "name": "BaseBdev2", 00:17:03.703 "aliases": [ 00:17:03.703 "0d0b719f-4327-461e-bf93-62a4f6d54c43" 00:17:03.703 ], 00:17:03.703 "product_name": "Malloc disk", 00:17:03.703 "block_size": 512, 00:17:03.703 "num_blocks": 65536, 00:17:03.703 "uuid": "0d0b719f-4327-461e-bf93-62a4f6d54c43", 00:17:03.703 "assigned_rate_limits": { 
00:17:03.703 "rw_ios_per_sec": 0, 00:17:03.703 "rw_mbytes_per_sec": 0, 00:17:03.703 "r_mbytes_per_sec": 0, 00:17:03.703 "w_mbytes_per_sec": 0 00:17:03.703 }, 00:17:03.703 "claimed": true, 00:17:03.703 "claim_type": "exclusive_write", 00:17:03.703 "zoned": false, 00:17:03.703 "supported_io_types": { 00:17:03.703 "read": true, 00:17:03.703 "write": true, 00:17:03.703 "unmap": true, 00:17:03.703 "flush": true, 00:17:03.703 "reset": true, 00:17:03.703 "nvme_admin": false, 00:17:03.703 "nvme_io": false, 00:17:03.703 "nvme_io_md": false, 00:17:03.703 "write_zeroes": true, 00:17:03.703 "zcopy": true, 00:17:03.703 "get_zone_info": false, 00:17:03.703 "zone_management": false, 00:17:03.703 "zone_append": false, 00:17:03.703 "compare": false, 00:17:03.703 "compare_and_write": false, 00:17:03.703 "abort": true, 00:17:03.703 "seek_hole": false, 00:17:03.703 "seek_data": false, 00:17:03.703 "copy": true, 00:17:03.703 "nvme_iov_md": false 00:17:03.703 }, 00:17:03.703 "memory_domains": [ 00:17:03.703 { 00:17:03.703 "dma_device_id": "system", 00:17:03.703 "dma_device_type": 1 00:17:03.703 }, 00:17:03.703 { 00:17:03.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.703 "dma_device_type": 2 00:17:03.703 } 00:17:03.703 ], 00:17:03.703 "driver_specific": {} 00:17:03.703 } 00:17:03.703 ] 00:17:03.703 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.703 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:03.703 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:03.703 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:03.703 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:03.703 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:17:03.703 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:03.703 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:03.703 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:03.703 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:03.703 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.703 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.703 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.703 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.703 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.703 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:03.703 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.703 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.962 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.962 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.962 "name": "Existed_Raid", 00:17:03.962 "uuid": "d3519d12-8abb-42e8-bb1c-65ce35a5d55a", 00:17:03.962 "strip_size_kb": 0, 00:17:03.962 "state": "configuring", 00:17:03.962 "raid_level": "raid1", 00:17:03.962 "superblock": true, 00:17:03.962 "num_base_bdevs": 4, 00:17:03.962 "num_base_bdevs_discovered": 2, 00:17:03.962 "num_base_bdevs_operational": 4, 00:17:03.962 
"base_bdevs_list": [ 00:17:03.962 { 00:17:03.962 "name": "BaseBdev1", 00:17:03.962 "uuid": "223402cc-f256-4d4d-9268-19bb55c0f05f", 00:17:03.962 "is_configured": true, 00:17:03.962 "data_offset": 2048, 00:17:03.962 "data_size": 63488 00:17:03.962 }, 00:17:03.962 { 00:17:03.963 "name": "BaseBdev2", 00:17:03.963 "uuid": "0d0b719f-4327-461e-bf93-62a4f6d54c43", 00:17:03.963 "is_configured": true, 00:17:03.963 "data_offset": 2048, 00:17:03.963 "data_size": 63488 00:17:03.963 }, 00:17:03.963 { 00:17:03.963 "name": "BaseBdev3", 00:17:03.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.963 "is_configured": false, 00:17:03.963 "data_offset": 0, 00:17:03.963 "data_size": 0 00:17:03.963 }, 00:17:03.963 { 00:17:03.963 "name": "BaseBdev4", 00:17:03.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.963 "is_configured": false, 00:17:03.963 "data_offset": 0, 00:17:03.963 "data_size": 0 00:17:03.963 } 00:17:03.963 ] 00:17:03.963 }' 00:17:03.963 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.963 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.221 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:04.221 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.221 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.222 [2024-11-20 13:37:03.642411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:04.222 BaseBdev3 00:17:04.222 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.222 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:04.222 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:17:04.222 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:04.222 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:04.222 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:04.222 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:04.222 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:04.222 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.222 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.222 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.222 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:04.222 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.222 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.222 [ 00:17:04.222 { 00:17:04.222 "name": "BaseBdev3", 00:17:04.222 "aliases": [ 00:17:04.222 "feba2620-9b9c-4a66-94c9-30f45709bc44" 00:17:04.222 ], 00:17:04.222 "product_name": "Malloc disk", 00:17:04.222 "block_size": 512, 00:17:04.222 "num_blocks": 65536, 00:17:04.222 "uuid": "feba2620-9b9c-4a66-94c9-30f45709bc44", 00:17:04.222 "assigned_rate_limits": { 00:17:04.222 "rw_ios_per_sec": 0, 00:17:04.222 "rw_mbytes_per_sec": 0, 00:17:04.222 "r_mbytes_per_sec": 0, 00:17:04.222 "w_mbytes_per_sec": 0 00:17:04.222 }, 00:17:04.222 "claimed": true, 00:17:04.222 "claim_type": "exclusive_write", 00:17:04.222 "zoned": false, 00:17:04.222 "supported_io_types": { 00:17:04.222 "read": true, 00:17:04.222 
"write": true, 00:17:04.222 "unmap": true, 00:17:04.222 "flush": true, 00:17:04.222 "reset": true, 00:17:04.222 "nvme_admin": false, 00:17:04.222 "nvme_io": false, 00:17:04.222 "nvme_io_md": false, 00:17:04.222 "write_zeroes": true, 00:17:04.222 "zcopy": true, 00:17:04.222 "get_zone_info": false, 00:17:04.222 "zone_management": false, 00:17:04.222 "zone_append": false, 00:17:04.222 "compare": false, 00:17:04.222 "compare_and_write": false, 00:17:04.222 "abort": true, 00:17:04.222 "seek_hole": false, 00:17:04.222 "seek_data": false, 00:17:04.222 "copy": true, 00:17:04.222 "nvme_iov_md": false 00:17:04.222 }, 00:17:04.222 "memory_domains": [ 00:17:04.222 { 00:17:04.222 "dma_device_id": "system", 00:17:04.222 "dma_device_type": 1 00:17:04.222 }, 00:17:04.222 { 00:17:04.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.222 "dma_device_type": 2 00:17:04.222 } 00:17:04.222 ], 00:17:04.222 "driver_specific": {} 00:17:04.222 } 00:17:04.222 ] 00:17:04.222 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.222 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:04.222 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:04.222 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:04.222 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:04.222 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:04.222 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:04.222 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:04.222 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:17:04.222 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:04.222 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.222 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.222 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.222 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.222 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.222 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.222 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.222 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:04.481 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.481 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.481 "name": "Existed_Raid", 00:17:04.481 "uuid": "d3519d12-8abb-42e8-bb1c-65ce35a5d55a", 00:17:04.481 "strip_size_kb": 0, 00:17:04.481 "state": "configuring", 00:17:04.481 "raid_level": "raid1", 00:17:04.481 "superblock": true, 00:17:04.481 "num_base_bdevs": 4, 00:17:04.481 "num_base_bdevs_discovered": 3, 00:17:04.481 "num_base_bdevs_operational": 4, 00:17:04.481 "base_bdevs_list": [ 00:17:04.481 { 00:17:04.481 "name": "BaseBdev1", 00:17:04.481 "uuid": "223402cc-f256-4d4d-9268-19bb55c0f05f", 00:17:04.481 "is_configured": true, 00:17:04.481 "data_offset": 2048, 00:17:04.481 "data_size": 63488 00:17:04.481 }, 00:17:04.481 { 00:17:04.481 "name": "BaseBdev2", 00:17:04.481 "uuid": 
"0d0b719f-4327-461e-bf93-62a4f6d54c43", 00:17:04.481 "is_configured": true, 00:17:04.481 "data_offset": 2048, 00:17:04.481 "data_size": 63488 00:17:04.481 }, 00:17:04.481 { 00:17:04.481 "name": "BaseBdev3", 00:17:04.481 "uuid": "feba2620-9b9c-4a66-94c9-30f45709bc44", 00:17:04.481 "is_configured": true, 00:17:04.481 "data_offset": 2048, 00:17:04.481 "data_size": 63488 00:17:04.481 }, 00:17:04.481 { 00:17:04.481 "name": "BaseBdev4", 00:17:04.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.481 "is_configured": false, 00:17:04.481 "data_offset": 0, 00:17:04.481 "data_size": 0 00:17:04.481 } 00:17:04.481 ] 00:17:04.481 }' 00:17:04.481 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.481 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.741 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:04.741 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.741 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.741 [2024-11-20 13:37:04.100678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:04.741 [2024-11-20 13:37:04.100951] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:04.741 [2024-11-20 13:37:04.100967] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:04.741 [2024-11-20 13:37:04.101282] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:04.741 [2024-11-20 13:37:04.101441] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:04.741 [2024-11-20 13:37:04.101456] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:17:04.741 [2024-11-20 13:37:04.101591] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:04.741 BaseBdev4 00:17:04.741 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.741 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:04.741 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:04.741 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:04.741 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:04.741 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:04.741 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:04.741 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:04.741 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.741 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.741 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.741 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:04.741 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.741 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.741 [ 00:17:04.741 { 00:17:04.741 "name": "BaseBdev4", 00:17:04.741 "aliases": [ 00:17:04.741 "e40b7012-3aef-47e7-9072-f2abb0c0877f" 00:17:04.741 ], 00:17:04.741 "product_name": "Malloc disk", 00:17:04.741 "block_size": 512, 00:17:04.741 
"num_blocks": 65536, 00:17:04.741 "uuid": "e40b7012-3aef-47e7-9072-f2abb0c0877f", 00:17:04.741 "assigned_rate_limits": { 00:17:04.741 "rw_ios_per_sec": 0, 00:17:04.741 "rw_mbytes_per_sec": 0, 00:17:04.741 "r_mbytes_per_sec": 0, 00:17:04.741 "w_mbytes_per_sec": 0 00:17:04.741 }, 00:17:04.741 "claimed": true, 00:17:04.741 "claim_type": "exclusive_write", 00:17:04.741 "zoned": false, 00:17:04.741 "supported_io_types": { 00:17:04.741 "read": true, 00:17:04.741 "write": true, 00:17:04.741 "unmap": true, 00:17:04.741 "flush": true, 00:17:04.741 "reset": true, 00:17:04.741 "nvme_admin": false, 00:17:04.741 "nvme_io": false, 00:17:04.741 "nvme_io_md": false, 00:17:04.741 "write_zeroes": true, 00:17:04.741 "zcopy": true, 00:17:04.741 "get_zone_info": false, 00:17:04.741 "zone_management": false, 00:17:04.741 "zone_append": false, 00:17:04.741 "compare": false, 00:17:04.741 "compare_and_write": false, 00:17:04.741 "abort": true, 00:17:04.741 "seek_hole": false, 00:17:04.741 "seek_data": false, 00:17:04.741 "copy": true, 00:17:04.741 "nvme_iov_md": false 00:17:04.741 }, 00:17:04.741 "memory_domains": [ 00:17:04.741 { 00:17:04.741 "dma_device_id": "system", 00:17:04.741 "dma_device_type": 1 00:17:04.741 }, 00:17:04.741 { 00:17:04.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.741 "dma_device_type": 2 00:17:04.741 } 00:17:04.741 ], 00:17:04.741 "driver_specific": {} 00:17:04.741 } 00:17:04.741 ] 00:17:04.741 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.741 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:04.741 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:04.741 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:04.741 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:17:04.741 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:04.741 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:04.741 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:04.741 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:04.741 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:04.741 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.741 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.741 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.741 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.741 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:04.741 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.742 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.742 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.742 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.742 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.742 "name": "Existed_Raid", 00:17:04.742 "uuid": "d3519d12-8abb-42e8-bb1c-65ce35a5d55a", 00:17:04.742 "strip_size_kb": 0, 00:17:04.742 "state": "online", 00:17:04.742 "raid_level": "raid1", 00:17:04.742 "superblock": true, 00:17:04.742 "num_base_bdevs": 4, 
00:17:04.742 "num_base_bdevs_discovered": 4, 00:17:04.742 "num_base_bdevs_operational": 4, 00:17:04.742 "base_bdevs_list": [ 00:17:04.742 { 00:17:04.742 "name": "BaseBdev1", 00:17:04.742 "uuid": "223402cc-f256-4d4d-9268-19bb55c0f05f", 00:17:04.742 "is_configured": true, 00:17:04.742 "data_offset": 2048, 00:17:04.742 "data_size": 63488 00:17:04.742 }, 00:17:04.742 { 00:17:04.742 "name": "BaseBdev2", 00:17:04.742 "uuid": "0d0b719f-4327-461e-bf93-62a4f6d54c43", 00:17:04.742 "is_configured": true, 00:17:04.742 "data_offset": 2048, 00:17:04.742 "data_size": 63488 00:17:04.742 }, 00:17:04.742 { 00:17:04.742 "name": "BaseBdev3", 00:17:04.742 "uuid": "feba2620-9b9c-4a66-94c9-30f45709bc44", 00:17:04.742 "is_configured": true, 00:17:04.742 "data_offset": 2048, 00:17:04.742 "data_size": 63488 00:17:04.742 }, 00:17:04.742 { 00:17:04.742 "name": "BaseBdev4", 00:17:04.742 "uuid": "e40b7012-3aef-47e7-9072-f2abb0c0877f", 00:17:04.742 "is_configured": true, 00:17:04.742 "data_offset": 2048, 00:17:04.742 "data_size": 63488 00:17:04.742 } 00:17:04.742 ] 00:17:04.742 }' 00:17:04.742 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.742 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.310 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:05.310 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:05.310 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:05.310 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:05.310 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:05.310 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:05.310 
13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:05.310 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:05.310 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.310 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.310 [2024-11-20 13:37:04.580419] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:05.310 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.310 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:05.310 "name": "Existed_Raid", 00:17:05.310 "aliases": [ 00:17:05.310 "d3519d12-8abb-42e8-bb1c-65ce35a5d55a" 00:17:05.310 ], 00:17:05.310 "product_name": "Raid Volume", 00:17:05.310 "block_size": 512, 00:17:05.310 "num_blocks": 63488, 00:17:05.310 "uuid": "d3519d12-8abb-42e8-bb1c-65ce35a5d55a", 00:17:05.310 "assigned_rate_limits": { 00:17:05.310 "rw_ios_per_sec": 0, 00:17:05.310 "rw_mbytes_per_sec": 0, 00:17:05.310 "r_mbytes_per_sec": 0, 00:17:05.310 "w_mbytes_per_sec": 0 00:17:05.310 }, 00:17:05.310 "claimed": false, 00:17:05.310 "zoned": false, 00:17:05.310 "supported_io_types": { 00:17:05.310 "read": true, 00:17:05.310 "write": true, 00:17:05.310 "unmap": false, 00:17:05.310 "flush": false, 00:17:05.310 "reset": true, 00:17:05.310 "nvme_admin": false, 00:17:05.310 "nvme_io": false, 00:17:05.310 "nvme_io_md": false, 00:17:05.310 "write_zeroes": true, 00:17:05.310 "zcopy": false, 00:17:05.310 "get_zone_info": false, 00:17:05.310 "zone_management": false, 00:17:05.310 "zone_append": false, 00:17:05.310 "compare": false, 00:17:05.310 "compare_and_write": false, 00:17:05.310 "abort": false, 00:17:05.310 "seek_hole": false, 00:17:05.310 "seek_data": false, 00:17:05.310 "copy": false, 00:17:05.310 
"nvme_iov_md": false 00:17:05.310 }, 00:17:05.310 "memory_domains": [ 00:17:05.310 { 00:17:05.310 "dma_device_id": "system", 00:17:05.310 "dma_device_type": 1 00:17:05.310 }, 00:17:05.310 { 00:17:05.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.310 "dma_device_type": 2 00:17:05.310 }, 00:17:05.310 { 00:17:05.310 "dma_device_id": "system", 00:17:05.310 "dma_device_type": 1 00:17:05.310 }, 00:17:05.310 { 00:17:05.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.310 "dma_device_type": 2 00:17:05.310 }, 00:17:05.310 { 00:17:05.310 "dma_device_id": "system", 00:17:05.310 "dma_device_type": 1 00:17:05.310 }, 00:17:05.310 { 00:17:05.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.310 "dma_device_type": 2 00:17:05.310 }, 00:17:05.310 { 00:17:05.310 "dma_device_id": "system", 00:17:05.310 "dma_device_type": 1 00:17:05.310 }, 00:17:05.310 { 00:17:05.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.310 "dma_device_type": 2 00:17:05.310 } 00:17:05.310 ], 00:17:05.310 "driver_specific": { 00:17:05.310 "raid": { 00:17:05.310 "uuid": "d3519d12-8abb-42e8-bb1c-65ce35a5d55a", 00:17:05.310 "strip_size_kb": 0, 00:17:05.310 "state": "online", 00:17:05.310 "raid_level": "raid1", 00:17:05.310 "superblock": true, 00:17:05.310 "num_base_bdevs": 4, 00:17:05.310 "num_base_bdevs_discovered": 4, 00:17:05.310 "num_base_bdevs_operational": 4, 00:17:05.310 "base_bdevs_list": [ 00:17:05.310 { 00:17:05.310 "name": "BaseBdev1", 00:17:05.310 "uuid": "223402cc-f256-4d4d-9268-19bb55c0f05f", 00:17:05.310 "is_configured": true, 00:17:05.310 "data_offset": 2048, 00:17:05.310 "data_size": 63488 00:17:05.310 }, 00:17:05.310 { 00:17:05.310 "name": "BaseBdev2", 00:17:05.310 "uuid": "0d0b719f-4327-461e-bf93-62a4f6d54c43", 00:17:05.310 "is_configured": true, 00:17:05.310 "data_offset": 2048, 00:17:05.310 "data_size": 63488 00:17:05.310 }, 00:17:05.310 { 00:17:05.310 "name": "BaseBdev3", 00:17:05.310 "uuid": "feba2620-9b9c-4a66-94c9-30f45709bc44", 00:17:05.310 "is_configured": true, 
00:17:05.310 "data_offset": 2048, 00:17:05.310 "data_size": 63488 00:17:05.310 }, 00:17:05.310 { 00:17:05.310 "name": "BaseBdev4", 00:17:05.310 "uuid": "e40b7012-3aef-47e7-9072-f2abb0c0877f", 00:17:05.310 "is_configured": true, 00:17:05.310 "data_offset": 2048, 00:17:05.310 "data_size": 63488 00:17:05.310 } 00:17:05.310 ] 00:17:05.310 } 00:17:05.310 } 00:17:05.310 }' 00:17:05.311 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:05.311 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:05.311 BaseBdev2 00:17:05.311 BaseBdev3 00:17:05.311 BaseBdev4' 00:17:05.311 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:05.311 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:05.311 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:05.311 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:05.311 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.311 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.311 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:05.311 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.311 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:05.311 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:05.311 13:37:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:05.311 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:05.311 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:05.311 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.311 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.311 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.311 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:05.311 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:05.311 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:05.311 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:05.311 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.311 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:05.311 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.570 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.570 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:05.570 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:05.570 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:17:05.570 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:05.570 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:05.570 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.570 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.570 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.570 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:05.570 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:05.570 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:05.570 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.570 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.570 [2024-11-20 13:37:04.851738] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:05.570 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.570 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:05.571 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:05.571 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:05.571 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:17:05.571 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:05.571 13:37:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3
00:17:05.571 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:17:05.571 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:05.571 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:05.571 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:05.571 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:17:05.571 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:05.571 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:05.571 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:05.571 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:05.571 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:05.571 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:05.571 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:05.571 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:05.571 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:05.571 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:05.571 "name": "Existed_Raid",
00:17:05.571 "uuid": "d3519d12-8abb-42e8-bb1c-65ce35a5d55a",
00:17:05.571 "strip_size_kb": 0,
00:17:05.571 "state": "online",
00:17:05.571 "raid_level": "raid1",
00:17:05.571 "superblock": true,
00:17:05.571 "num_base_bdevs": 4,
00:17:05.571 "num_base_bdevs_discovered": 3,
00:17:05.571 "num_base_bdevs_operational": 3,
00:17:05.571 "base_bdevs_list": [
00:17:05.571 {
00:17:05.571 "name": null,
00:17:05.571 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:05.571 "is_configured": false,
00:17:05.571 "data_offset": 0,
00:17:05.571 "data_size": 63488
00:17:05.571 },
00:17:05.571 {
00:17:05.571 "name": "BaseBdev2",
00:17:05.571 "uuid": "0d0b719f-4327-461e-bf93-62a4f6d54c43",
00:17:05.571 "is_configured": true,
00:17:05.571 "data_offset": 2048,
00:17:05.571 "data_size": 63488
00:17:05.571 },
00:17:05.571 {
00:17:05.571 "name": "BaseBdev3",
00:17:05.571 "uuid": "feba2620-9b9c-4a66-94c9-30f45709bc44",
00:17:05.571 "is_configured": true,
00:17:05.571 "data_offset": 2048,
00:17:05.571 "data_size": 63488
00:17:05.571 },
00:17:05.571 {
00:17:05.571 "name": "BaseBdev4",
00:17:05.571 "uuid": "e40b7012-3aef-47e7-9072-f2abb0c0877f",
00:17:05.571 "is_configured": true,
00:17:05.571 "data_offset": 2048,
00:17:05.571 "data_size": 63488
00:17:05.571 }
00:17:05.571 ]
00:17:05.571 }'
00:17:05.571 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:05.571 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:06.139 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:17:06.139 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:17:06.139 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:06.139 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:17:06.139 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.139 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:06.139 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.139 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:17:06.139 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:17:06.139 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:17:06.139 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.139 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:06.139 [2024-11-20 13:37:05.430459] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:17:06.139 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.139 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:17:06.139 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:17:06.139 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:17:06.139 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:06.139 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.139 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:06.139 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.139 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:17:06.139 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:17:06.139 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:17:06.139 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.139 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:06.139 [2024-11-20 13:37:05.570166] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:17:06.398 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.398 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:17:06.398 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:17:06.398 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:06.398 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:17:06.398 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.398 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:06.398 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.398 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:17:06.398 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:17:06.398 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4
00:17:06.398 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.398 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:06.398 [2024-11-20 13:37:05.722958] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:17:06.398 [2024-11-20 13:37:05.723078] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:17:06.398 [2024-11-20 13:37:05.820815] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:06.398 [2024-11-20 13:37:05.820888] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:06.398 [2024-11-20 13:37:05.820906] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:17:06.398 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.398 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:17:06.398 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:17:06.398 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:17:06.398 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:06.398 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.398 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:06.398 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.398 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:17:06.398 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:17:06.398 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']'
00:17:06.398 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:17:06.398 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:17:06.398 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:17:06.398 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.398 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:06.659 BaseBdev2
00:17:06.659 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.659 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:17:06.659 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:17:06.659 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:17:06.659 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:17:06.659 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:17:06.659 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:17:06.659 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:17:06.659 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.659 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:06.659 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.659 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:17:06.659 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.659 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:06.659 [
00:17:06.659 {
00:17:06.659 "name": "BaseBdev2",
00:17:06.659 "aliases": [
00:17:06.659 "5d7d0057-8f08-4ce6-b1d8-3b4b9c6e118a"
00:17:06.659 ],
00:17:06.659 "product_name": "Malloc disk",
00:17:06.659 "block_size": 512,
00:17:06.659 "num_blocks": 65536,
00:17:06.659 "uuid": "5d7d0057-8f08-4ce6-b1d8-3b4b9c6e118a",
00:17:06.659 "assigned_rate_limits": {
00:17:06.659 "rw_ios_per_sec": 0,
00:17:06.659 "rw_mbytes_per_sec": 0,
00:17:06.659 "r_mbytes_per_sec": 0,
00:17:06.659 "w_mbytes_per_sec": 0
00:17:06.659 },
00:17:06.659 "claimed": false,
00:17:06.659 "zoned": false,
00:17:06.659 "supported_io_types": {
00:17:06.659 "read": true,
00:17:06.659 "write": true,
00:17:06.659 "unmap": true,
00:17:06.659 "flush": true,
00:17:06.659 "reset": true,
00:17:06.659 "nvme_admin": false,
00:17:06.659 "nvme_io": false,
00:17:06.659 "nvme_io_md": false,
00:17:06.659 "write_zeroes": true,
00:17:06.659 "zcopy": true,
00:17:06.659 "get_zone_info": false,
00:17:06.659 "zone_management": false,
00:17:06.659 "zone_append": false,
00:17:06.659 "compare": false,
00:17:06.659 "compare_and_write": false,
00:17:06.659 "abort": true,
00:17:06.659 "seek_hole": false,
00:17:06.659 "seek_data": false,
00:17:06.659 "copy": true,
00:17:06.659 "nvme_iov_md": false
00:17:06.659 },
00:17:06.659 "memory_domains": [
00:17:06.659 {
00:17:06.659 "dma_device_id": "system",
00:17:06.659 "dma_device_type": 1
00:17:06.659 },
00:17:06.659 {
00:17:06.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:06.659 "dma_device_type": 2
00:17:06.659 }
00:17:06.659 ],
00:17:06.659 "driver_specific": {}
00:17:06.659 }
00:17:06.659 ]
00:17:06.659 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.659 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:17:06.659 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:17:06.659 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:17:06.659 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:17:06.659 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.659 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:06.659 BaseBdev3
00:17:06.659 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.659 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:17:06.659 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:17:06.659 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:17:06.659 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:17:06.659 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:17:06.659 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:17:06.659 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:17:06.659 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.659 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:06.659 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.659 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:17:06.659 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.659 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:06.659 [
00:17:06.659 {
00:17:06.659 "name": "BaseBdev3",
00:17:06.659 "aliases": [
00:17:06.659 "e09f01a7-a7b4-470f-a305-ebebd7c3a3ac"
00:17:06.659 ],
00:17:06.659 "product_name": "Malloc disk",
00:17:06.659 "block_size": 512,
00:17:06.659 "num_blocks": 65536,
00:17:06.659 "uuid": "e09f01a7-a7b4-470f-a305-ebebd7c3a3ac",
00:17:06.659 "assigned_rate_limits": {
00:17:06.659 "rw_ios_per_sec": 0,
00:17:06.659 "rw_mbytes_per_sec": 0,
00:17:06.659 "r_mbytes_per_sec": 0,
00:17:06.659 "w_mbytes_per_sec": 0
00:17:06.659 },
00:17:06.659 "claimed": false,
00:17:06.659 "zoned": false,
00:17:06.659 "supported_io_types": {
00:17:06.659 "read": true,
00:17:06.659 "write": true,
00:17:06.659 "unmap": true,
00:17:06.659 "flush": true,
00:17:06.659 "reset": true,
00:17:06.659 "nvme_admin": false,
00:17:06.659 "nvme_io": false,
00:17:06.659 "nvme_io_md": false,
00:17:06.659 "write_zeroes": true,
00:17:06.659 "zcopy": true,
00:17:06.659 "get_zone_info": false,
00:17:06.659 "zone_management": false,
00:17:06.659 "zone_append": false,
00:17:06.659 "compare": false,
00:17:06.659 "compare_and_write": false,
00:17:06.659 "abort": true,
00:17:06.659 "seek_hole": false,
00:17:06.659 "seek_data": false,
00:17:06.659 "copy": true,
00:17:06.659 "nvme_iov_md": false
00:17:06.659 },
00:17:06.659 "memory_domains": [
00:17:06.659 {
00:17:06.659 "dma_device_id": "system",
00:17:06.659 "dma_device_type": 1
00:17:06.659 },
00:17:06.659 {
00:17:06.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:06.659 "dma_device_type": 2
00:17:06.659 }
00:17:06.659 ],
00:17:06.659 "driver_specific": {}
00:17:06.659 }
00:17:06.659 ]
00:17:06.659 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.659 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:17:06.659 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:17:06.659 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:17:06.659 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:17:06.659 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.659 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:06.659 BaseBdev4
00:17:06.659 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.659 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4
00:17:06.659 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4
00:17:06.659 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:17:06.659 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:17:06.659 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:17:06.659 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:17:06.659 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:17:06.659 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.659 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:06.659 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.659 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:17:06.659 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.659 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:06.659 [
00:17:06.660 {
00:17:06.660 "name": "BaseBdev4",
00:17:06.660 "aliases": [
00:17:06.660 "e3d4e692-c7a0-4e71-9c30-bbac1dd6f1ac"
00:17:06.660 ],
00:17:06.660 "product_name": "Malloc disk",
00:17:06.660 "block_size": 512,
00:17:06.660 "num_blocks": 65536,
00:17:06.660 "uuid": "e3d4e692-c7a0-4e71-9c30-bbac1dd6f1ac",
00:17:06.660 "assigned_rate_limits": {
00:17:06.660 "rw_ios_per_sec": 0,
00:17:06.660 "rw_mbytes_per_sec": 0,
00:17:06.660 "r_mbytes_per_sec": 0,
00:17:06.660 "w_mbytes_per_sec": 0
00:17:06.660 },
00:17:06.660 "claimed": false,
00:17:06.660 "zoned": false,
00:17:06.660 "supported_io_types": {
00:17:06.660 "read": true,
00:17:06.660 "write": true,
00:17:06.660 "unmap": true,
00:17:06.660 "flush": true,
00:17:06.660 "reset": true,
00:17:06.660 "nvme_admin": false,
00:17:06.660 "nvme_io": false,
00:17:06.660 "nvme_io_md": false,
00:17:06.660 "write_zeroes": true,
00:17:06.660 "zcopy": true,
00:17:06.660 "get_zone_info": false,
00:17:06.660 "zone_management": false,
00:17:06.660 "zone_append": false,
00:17:06.660 "compare": false,
00:17:06.660 "compare_and_write": false,
00:17:06.660 "abort": true,
00:17:06.660 "seek_hole": false,
00:17:06.660 "seek_data": false,
00:17:06.660 "copy": true,
00:17:06.660 "nvme_iov_md": false
00:17:06.660 },
00:17:06.660 "memory_domains": [
00:17:06.660 {
00:17:06.660 "dma_device_id": "system",
00:17:06.660 "dma_device_type": 1
00:17:06.660 },
00:17:06.660 {
00:17:06.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:06.660 "dma_device_type": 2
00:17:06.660 }
00:17:06.660 ],
00:17:06.660 "driver_specific": {}
00:17:06.660 }
00:17:06.660 ]
00:17:06.660 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.660 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:17:06.660 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:17:06.660 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:17:06.660 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:17:06.660 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.660 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:06.660 [2024-11-20 13:37:06.111466] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:17:06.660 [2024-11-20 13:37:06.111523] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:17:06.660 [2024-11-20 13:37:06.111545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:17:06.660 [2024-11-20 13:37:06.113710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:17:06.660 [2024-11-20 13:37:06.113764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:17:06.660 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.660 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:17:06.660 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:17:06.660 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:17:06.660 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:06.660 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:06.660 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:17:06.660 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:06.660 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:06.660 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:06.660 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:06.660 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:06.660 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.660 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:06.660 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:06.919 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.919 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:06.919 "name": "Existed_Raid",
00:17:06.919 "uuid": "c5973937-bf07-461a-8c1e-ca9110847831",
00:17:06.919 "strip_size_kb": 0,
00:17:06.919 "state": "configuring",
00:17:06.919 "raid_level": "raid1",
00:17:06.919 "superblock": true,
00:17:06.919 "num_base_bdevs": 4,
00:17:06.919 "num_base_bdevs_discovered": 3,
00:17:06.919 "num_base_bdevs_operational": 4,
00:17:06.919 "base_bdevs_list": [
00:17:06.919 {
00:17:06.919 "name": "BaseBdev1",
00:17:06.919 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:06.919 "is_configured": false,
00:17:06.919 "data_offset": 0,
00:17:06.919 "data_size": 0
00:17:06.919 },
00:17:06.919 {
00:17:06.919 "name": "BaseBdev2",
00:17:06.919 "uuid": "5d7d0057-8f08-4ce6-b1d8-3b4b9c6e118a",
00:17:06.919 "is_configured": true,
00:17:06.919 "data_offset": 2048,
00:17:06.919 "data_size": 63488
00:17:06.919 },
00:17:06.919 {
00:17:06.919 "name": "BaseBdev3",
00:17:06.919 "uuid": "e09f01a7-a7b4-470f-a305-ebebd7c3a3ac",
00:17:06.919 "is_configured": true,
00:17:06.919 "data_offset": 2048,
00:17:06.919 "data_size": 63488
00:17:06.919 },
00:17:06.919 {
00:17:06.919 "name": "BaseBdev4",
00:17:06.919 "uuid": "e3d4e692-c7a0-4e71-9c30-bbac1dd6f1ac",
00:17:06.919 "is_configured": true,
00:17:06.919 "data_offset": 2048,
00:17:06.919 "data_size": 63488
00:17:06.919 }
00:17:06.919 ]
00:17:06.919 }'
00:17:06.919 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:06.919 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:07.179 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:17:07.179 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:07.179 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:07.179 [2024-11-20 13:37:06.546908] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:17:07.179 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:07.179 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:17:07.179 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:17:07.179 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:17:07.179 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:07.179 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:07.179 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:17:07.179 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:07.179 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:07.179 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:07.179 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:07.179 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:07.179 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:07.179 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:07.179 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:07.179 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:07.179 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:07.179 "name": "Existed_Raid",
00:17:07.179 "uuid": "c5973937-bf07-461a-8c1e-ca9110847831",
00:17:07.179 "strip_size_kb": 0,
00:17:07.179 "state": "configuring",
00:17:07.179 "raid_level": "raid1",
00:17:07.179 "superblock": true,
00:17:07.179 "num_base_bdevs": 4,
00:17:07.179 "num_base_bdevs_discovered": 2,
00:17:07.179 "num_base_bdevs_operational": 4,
00:17:07.179 "base_bdevs_list": [
00:17:07.179 {
00:17:07.179 "name": "BaseBdev1",
00:17:07.179 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:07.179 "is_configured": false,
00:17:07.179 "data_offset": 0,
00:17:07.179 "data_size": 0
00:17:07.179 },
00:17:07.179 {
00:17:07.179 "name": null,
00:17:07.179 "uuid": "5d7d0057-8f08-4ce6-b1d8-3b4b9c6e118a",
00:17:07.179 "is_configured": false,
00:17:07.179 "data_offset": 0,
00:17:07.179 "data_size": 63488
00:17:07.179 },
00:17:07.179 {
00:17:07.179 "name": "BaseBdev3",
00:17:07.179 "uuid": "e09f01a7-a7b4-470f-a305-ebebd7c3a3ac",
00:17:07.179 "is_configured": true,
00:17:07.179 "data_offset": 2048,
00:17:07.179 "data_size": 63488
00:17:07.179 },
00:17:07.179 {
00:17:07.179 "name": "BaseBdev4",
00:17:07.179 "uuid": "e3d4e692-c7a0-4e71-9c30-bbac1dd6f1ac",
00:17:07.179 "is_configured": true,
00:17:07.179 "data_offset": 2048,
00:17:07.179 "data_size": 63488
00:17:07.179 }
00:17:07.179 ]
00:17:07.179 }'
00:17:07.179 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:07.179 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:07.748 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:07.748 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:07.748 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:07.748 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:17:07.748 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:07.748 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:17:07.748 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:17:07.748 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:07.748 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:07.748 [2024-11-20 13:37:07.048390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:17:07.748 BaseBdev1
00:17:07.748 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:07.748 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:17:07.748 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:17:07.748 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:17:07.748 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:17:07.748 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:17:07.748 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:17:07.748 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:17:07.748 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:07.748 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:07.748 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:07.748 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:17:07.748 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:07.748 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:07.748 [
00:17:07.748 {
00:17:07.748 "name": "BaseBdev1",
00:17:07.748 "aliases": [
00:17:07.748 "ff3a7453-cb37-4e03-97df-d05288983c2b"
00:17:07.748 ],
00:17:07.748 "product_name": "Malloc disk",
00:17:07.748 "block_size": 512,
00:17:07.749 "num_blocks": 65536,
00:17:07.749 "uuid": "ff3a7453-cb37-4e03-97df-d05288983c2b",
00:17:07.749 "assigned_rate_limits": {
00:17:07.749 "rw_ios_per_sec": 0,
00:17:07.749 "rw_mbytes_per_sec": 0,
00:17:07.749 "r_mbytes_per_sec": 0,
00:17:07.749 "w_mbytes_per_sec": 0
00:17:07.749 },
00:17:07.749 "claimed": true,
00:17:07.749 "claim_type": "exclusive_write",
00:17:07.749 "zoned": false,
00:17:07.749 "supported_io_types": {
00:17:07.749 "read": true,
00:17:07.749 "write": true,
00:17:07.749 "unmap": true,
00:17:07.749 "flush": true,
00:17:07.749 "reset": true,
00:17:07.749 "nvme_admin": false,
00:17:07.749 "nvme_io": false,
00:17:07.749 "nvme_io_md": false,
00:17:07.749 "write_zeroes": true,
00:17:07.749 "zcopy": true,
00:17:07.749 "get_zone_info": false,
00:17:07.749 "zone_management": false,
00:17:07.749 "zone_append": false,
00:17:07.749 "compare": false,
00:17:07.749 "compare_and_write": false,
00:17:07.749 "abort": true,
00:17:07.749 "seek_hole": false,
00:17:07.749 "seek_data": false,
00:17:07.749 "copy": true,
00:17:07.749 "nvme_iov_md": false
00:17:07.749 },
00:17:07.749 "memory_domains": [
00:17:07.749 {
00:17:07.749 "dma_device_id": "system",
00:17:07.749 "dma_device_type": 1
00:17:07.749 },
00:17:07.749 {
00:17:07.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:07.749 "dma_device_type": 2
00:17:07.749 }
00:17:07.749 ],
00:17:07.749 "driver_specific": {}
00:17:07.749 }
00:17:07.749 ]
00:17:07.749 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:07.749 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:17:07.749 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:17:07.749 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:17:07.749 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:17:07.749 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:07.749 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:07.749 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:17:07.749 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:07.749 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:07.749 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:07.749 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:07.749 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:07.749 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:07.749 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:07.749 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:07.749 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:07.749 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:07.749 "name": "Existed_Raid",
00:17:07.749 "uuid": "c5973937-bf07-461a-8c1e-ca9110847831",
00:17:07.749 "strip_size_kb": 0,
00:17:07.749 "state": "configuring",
00:17:07.749 "raid_level": "raid1",
00:17:07.749 "superblock": true,
00:17:07.749 "num_base_bdevs": 4,
00:17:07.749 "num_base_bdevs_discovered": 3,
00:17:07.749 "num_base_bdevs_operational": 4,
00:17:07.749 "base_bdevs_list": [
00:17:07.749 {
00:17:07.749 "name": "BaseBdev1",
00:17:07.749 "uuid": "ff3a7453-cb37-4e03-97df-d05288983c2b",
00:17:07.749 "is_configured": true,
00:17:07.749 "data_offset": 2048,
00:17:07.749 "data_size": 63488
00:17:07.749 },
00:17:07.749 {
00:17:07.749 "name": null,
00:17:07.749 "uuid": "5d7d0057-8f08-4ce6-b1d8-3b4b9c6e118a",
00:17:07.749 "is_configured": false,
00:17:07.749 "data_offset": 0,
00:17:07.749 "data_size": 63488
00:17:07.749 },
00:17:07.749 {
00:17:07.749 "name": "BaseBdev3",
00:17:07.749 "uuid": "e09f01a7-a7b4-470f-a305-ebebd7c3a3ac",
00:17:07.749 "is_configured": true,
00:17:07.749 "data_offset": 2048,
00:17:07.749 "data_size": 63488
00:17:07.749 },
00:17:07.749 {
00:17:07.749 "name": "BaseBdev4",
00:17:07.749 "uuid": "e3d4e692-c7a0-4e71-9c30-bbac1dd6f1ac",
00:17:07.749 "is_configured": true,
00:17:07.749 "data_offset": 2048,
00:17:07.749 "data_size": 63488
00:17:07.749 }
00:17:07.749 ]
00:17:07.749 }'
00:17:07.749 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:07.749 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:08.316 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:08.316 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:08.316 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:08.316 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:17:08.316 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:08.316 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:17:08.316 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:17:08.316 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:08.316 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:08.316 [2024-11-20 13:37:07.551809] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:17:08.316 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:08.316 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:17:08.316 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:17:08.316 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:17:08.316 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:08.316 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:08.316 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:17:08.316 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:08.316 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:08.316 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:08.316 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:08.316 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:08.316 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:08.316 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:08.316 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:08.316 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:08.316 13:37:07
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.316 "name": "Existed_Raid", 00:17:08.316 "uuid": "c5973937-bf07-461a-8c1e-ca9110847831", 00:17:08.316 "strip_size_kb": 0, 00:17:08.316 "state": "configuring", 00:17:08.316 "raid_level": "raid1", 00:17:08.316 "superblock": true, 00:17:08.316 "num_base_bdevs": 4, 00:17:08.316 "num_base_bdevs_discovered": 2, 00:17:08.316 "num_base_bdevs_operational": 4, 00:17:08.316 "base_bdevs_list": [ 00:17:08.316 { 00:17:08.316 "name": "BaseBdev1", 00:17:08.316 "uuid": "ff3a7453-cb37-4e03-97df-d05288983c2b", 00:17:08.316 "is_configured": true, 00:17:08.316 "data_offset": 2048, 00:17:08.316 "data_size": 63488 00:17:08.316 }, 00:17:08.316 { 00:17:08.316 "name": null, 00:17:08.316 "uuid": "5d7d0057-8f08-4ce6-b1d8-3b4b9c6e118a", 00:17:08.316 "is_configured": false, 00:17:08.316 "data_offset": 0, 00:17:08.316 "data_size": 63488 00:17:08.316 }, 00:17:08.316 { 00:17:08.316 "name": null, 00:17:08.317 "uuid": "e09f01a7-a7b4-470f-a305-ebebd7c3a3ac", 00:17:08.317 "is_configured": false, 00:17:08.317 "data_offset": 0, 00:17:08.317 "data_size": 63488 00:17:08.317 }, 00:17:08.317 { 00:17:08.317 "name": "BaseBdev4", 00:17:08.317 "uuid": "e3d4e692-c7a0-4e71-9c30-bbac1dd6f1ac", 00:17:08.317 "is_configured": true, 00:17:08.317 "data_offset": 2048, 00:17:08.317 "data_size": 63488 00:17:08.317 } 00:17:08.317 ] 00:17:08.317 }' 00:17:08.317 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.317 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.575 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.575 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.575 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.575 13:37:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:08.575 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.575 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:08.575 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:08.575 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.575 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.575 [2024-11-20 13:37:07.991239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:08.575 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.575 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:08.575 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:08.575 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:08.575 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:08.575 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:08.575 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:08.575 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.575 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.575 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:08.575 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.575 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.575 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.575 13:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.575 13:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.575 13:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.575 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.575 "name": "Existed_Raid", 00:17:08.575 "uuid": "c5973937-bf07-461a-8c1e-ca9110847831", 00:17:08.575 "strip_size_kb": 0, 00:17:08.575 "state": "configuring", 00:17:08.575 "raid_level": "raid1", 00:17:08.575 "superblock": true, 00:17:08.575 "num_base_bdevs": 4, 00:17:08.575 "num_base_bdevs_discovered": 3, 00:17:08.575 "num_base_bdevs_operational": 4, 00:17:08.575 "base_bdevs_list": [ 00:17:08.575 { 00:17:08.575 "name": "BaseBdev1", 00:17:08.575 "uuid": "ff3a7453-cb37-4e03-97df-d05288983c2b", 00:17:08.575 "is_configured": true, 00:17:08.575 "data_offset": 2048, 00:17:08.575 "data_size": 63488 00:17:08.575 }, 00:17:08.575 { 00:17:08.575 "name": null, 00:17:08.575 "uuid": "5d7d0057-8f08-4ce6-b1d8-3b4b9c6e118a", 00:17:08.575 "is_configured": false, 00:17:08.575 "data_offset": 0, 00:17:08.575 "data_size": 63488 00:17:08.575 }, 00:17:08.575 { 00:17:08.575 "name": "BaseBdev3", 00:17:08.575 "uuid": "e09f01a7-a7b4-470f-a305-ebebd7c3a3ac", 00:17:08.575 "is_configured": true, 00:17:08.575 "data_offset": 2048, 00:17:08.575 "data_size": 63488 00:17:08.575 }, 00:17:08.576 { 00:17:08.576 "name": "BaseBdev4", 00:17:08.576 "uuid": 
"e3d4e692-c7a0-4e71-9c30-bbac1dd6f1ac", 00:17:08.576 "is_configured": true, 00:17:08.576 "data_offset": 2048, 00:17:08.576 "data_size": 63488 00:17:08.576 } 00:17:08.576 ] 00:17:08.576 }' 00:17:08.576 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.576 13:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.143 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.143 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:09.143 13:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.143 13:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.143 13:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.143 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:09.143 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:09.143 13:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.143 13:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.143 [2024-11-20 13:37:08.446783] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:09.143 13:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.143 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:09.143 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:09.143 13:37:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:09.143 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:09.143 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:09.143 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:09.143 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.143 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.143 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.143 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.143 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.143 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:09.143 13:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.143 13:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.143 13:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.143 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.143 "name": "Existed_Raid", 00:17:09.143 "uuid": "c5973937-bf07-461a-8c1e-ca9110847831", 00:17:09.143 "strip_size_kb": 0, 00:17:09.144 "state": "configuring", 00:17:09.144 "raid_level": "raid1", 00:17:09.144 "superblock": true, 00:17:09.144 "num_base_bdevs": 4, 00:17:09.144 "num_base_bdevs_discovered": 2, 00:17:09.144 "num_base_bdevs_operational": 4, 00:17:09.144 "base_bdevs_list": [ 00:17:09.144 { 00:17:09.144 "name": null, 00:17:09.144 
"uuid": "ff3a7453-cb37-4e03-97df-d05288983c2b", 00:17:09.144 "is_configured": false, 00:17:09.144 "data_offset": 0, 00:17:09.144 "data_size": 63488 00:17:09.144 }, 00:17:09.144 { 00:17:09.144 "name": null, 00:17:09.144 "uuid": "5d7d0057-8f08-4ce6-b1d8-3b4b9c6e118a", 00:17:09.144 "is_configured": false, 00:17:09.144 "data_offset": 0, 00:17:09.144 "data_size": 63488 00:17:09.144 }, 00:17:09.144 { 00:17:09.144 "name": "BaseBdev3", 00:17:09.144 "uuid": "e09f01a7-a7b4-470f-a305-ebebd7c3a3ac", 00:17:09.144 "is_configured": true, 00:17:09.144 "data_offset": 2048, 00:17:09.144 "data_size": 63488 00:17:09.144 }, 00:17:09.144 { 00:17:09.144 "name": "BaseBdev4", 00:17:09.144 "uuid": "e3d4e692-c7a0-4e71-9c30-bbac1dd6f1ac", 00:17:09.144 "is_configured": true, 00:17:09.144 "data_offset": 2048, 00:17:09.144 "data_size": 63488 00:17:09.144 } 00:17:09.144 ] 00:17:09.144 }' 00:17:09.144 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.144 13:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.726 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.726 13:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.726 13:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.726 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:09.726 13:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.726 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:09.726 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:09.726 13:37:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.726 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.726 [2024-11-20 13:37:09.024767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:09.726 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.726 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:09.726 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:09.726 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:09.726 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:09.727 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:09.727 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:09.727 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.727 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.727 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.727 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.727 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:09.727 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.727 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.727 13:37:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.727 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.727 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.727 "name": "Existed_Raid", 00:17:09.727 "uuid": "c5973937-bf07-461a-8c1e-ca9110847831", 00:17:09.727 "strip_size_kb": 0, 00:17:09.727 "state": "configuring", 00:17:09.727 "raid_level": "raid1", 00:17:09.727 "superblock": true, 00:17:09.727 "num_base_bdevs": 4, 00:17:09.727 "num_base_bdevs_discovered": 3, 00:17:09.727 "num_base_bdevs_operational": 4, 00:17:09.727 "base_bdevs_list": [ 00:17:09.727 { 00:17:09.727 "name": null, 00:17:09.727 "uuid": "ff3a7453-cb37-4e03-97df-d05288983c2b", 00:17:09.727 "is_configured": false, 00:17:09.727 "data_offset": 0, 00:17:09.727 "data_size": 63488 00:17:09.727 }, 00:17:09.727 { 00:17:09.727 "name": "BaseBdev2", 00:17:09.727 "uuid": "5d7d0057-8f08-4ce6-b1d8-3b4b9c6e118a", 00:17:09.727 "is_configured": true, 00:17:09.727 "data_offset": 2048, 00:17:09.727 "data_size": 63488 00:17:09.727 }, 00:17:09.727 { 00:17:09.727 "name": "BaseBdev3", 00:17:09.727 "uuid": "e09f01a7-a7b4-470f-a305-ebebd7c3a3ac", 00:17:09.727 "is_configured": true, 00:17:09.727 "data_offset": 2048, 00:17:09.727 "data_size": 63488 00:17:09.727 }, 00:17:09.727 { 00:17:09.727 "name": "BaseBdev4", 00:17:09.727 "uuid": "e3d4e692-c7a0-4e71-9c30-bbac1dd6f1ac", 00:17:09.727 "is_configured": true, 00:17:09.727 "data_offset": 2048, 00:17:09.727 "data_size": 63488 00:17:09.727 } 00:17:09.727 ] 00:17:09.727 }' 00:17:09.727 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.727 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.014 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.014 13:37:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.014 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.014 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:10.014 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.274 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:10.274 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.274 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:10.274 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.274 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.274 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.275 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ff3a7453-cb37-4e03-97df-d05288983c2b 00:17:10.275 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.275 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.275 [2024-11-20 13:37:09.596563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:10.275 [2024-11-20 13:37:09.596809] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:10.275 [2024-11-20 13:37:09.596829] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:10.275 [2024-11-20 13:37:09.597138] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:17:10.275 [2024-11-20 13:37:09.597293] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:10.275 [2024-11-20 13:37:09.597305] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:10.275 NewBaseBdev 00:17:10.275 [2024-11-20 13:37:09.597445] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:10.275 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.275 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:10.275 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:17:10.275 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:10.275 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:10.275 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:10.275 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:10.275 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:10.275 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.275 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.275 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.275 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:10.275 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.275 13:37:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.275 [ 00:17:10.275 { 00:17:10.275 "name": "NewBaseBdev", 00:17:10.275 "aliases": [ 00:17:10.275 "ff3a7453-cb37-4e03-97df-d05288983c2b" 00:17:10.275 ], 00:17:10.275 "product_name": "Malloc disk", 00:17:10.275 "block_size": 512, 00:17:10.275 "num_blocks": 65536, 00:17:10.275 "uuid": "ff3a7453-cb37-4e03-97df-d05288983c2b", 00:17:10.275 "assigned_rate_limits": { 00:17:10.275 "rw_ios_per_sec": 0, 00:17:10.275 "rw_mbytes_per_sec": 0, 00:17:10.275 "r_mbytes_per_sec": 0, 00:17:10.275 "w_mbytes_per_sec": 0 00:17:10.275 }, 00:17:10.275 "claimed": true, 00:17:10.275 "claim_type": "exclusive_write", 00:17:10.275 "zoned": false, 00:17:10.275 "supported_io_types": { 00:17:10.275 "read": true, 00:17:10.275 "write": true, 00:17:10.275 "unmap": true, 00:17:10.275 "flush": true, 00:17:10.275 "reset": true, 00:17:10.275 "nvme_admin": false, 00:17:10.275 "nvme_io": false, 00:17:10.275 "nvme_io_md": false, 00:17:10.275 "write_zeroes": true, 00:17:10.275 "zcopy": true, 00:17:10.275 "get_zone_info": false, 00:17:10.275 "zone_management": false, 00:17:10.275 "zone_append": false, 00:17:10.275 "compare": false, 00:17:10.275 "compare_and_write": false, 00:17:10.275 "abort": true, 00:17:10.275 "seek_hole": false, 00:17:10.275 "seek_data": false, 00:17:10.275 "copy": true, 00:17:10.275 "nvme_iov_md": false 00:17:10.275 }, 00:17:10.275 "memory_domains": [ 00:17:10.275 { 00:17:10.275 "dma_device_id": "system", 00:17:10.275 "dma_device_type": 1 00:17:10.275 }, 00:17:10.275 { 00:17:10.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.275 "dma_device_type": 2 00:17:10.275 } 00:17:10.275 ], 00:17:10.275 "driver_specific": {} 00:17:10.275 } 00:17:10.275 ] 00:17:10.275 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.275 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:10.275 13:37:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:17:10.275 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:10.275 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:10.275 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:10.275 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:10.275 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:10.275 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.275 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.275 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.275 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.275 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.275 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.275 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.275 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:10.275 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.275 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.275 "name": "Existed_Raid", 00:17:10.275 "uuid": "c5973937-bf07-461a-8c1e-ca9110847831", 00:17:10.275 "strip_size_kb": 0, 00:17:10.275 
"state": "online", 00:17:10.275 "raid_level": "raid1", 00:17:10.275 "superblock": true, 00:17:10.275 "num_base_bdevs": 4, 00:17:10.275 "num_base_bdevs_discovered": 4, 00:17:10.275 "num_base_bdevs_operational": 4, 00:17:10.275 "base_bdevs_list": [ 00:17:10.275 { 00:17:10.275 "name": "NewBaseBdev", 00:17:10.275 "uuid": "ff3a7453-cb37-4e03-97df-d05288983c2b", 00:17:10.275 "is_configured": true, 00:17:10.275 "data_offset": 2048, 00:17:10.275 "data_size": 63488 00:17:10.275 }, 00:17:10.275 { 00:17:10.275 "name": "BaseBdev2", 00:17:10.275 "uuid": "5d7d0057-8f08-4ce6-b1d8-3b4b9c6e118a", 00:17:10.275 "is_configured": true, 00:17:10.275 "data_offset": 2048, 00:17:10.275 "data_size": 63488 00:17:10.275 }, 00:17:10.275 { 00:17:10.275 "name": "BaseBdev3", 00:17:10.275 "uuid": "e09f01a7-a7b4-470f-a305-ebebd7c3a3ac", 00:17:10.275 "is_configured": true, 00:17:10.275 "data_offset": 2048, 00:17:10.275 "data_size": 63488 00:17:10.275 }, 00:17:10.275 { 00:17:10.275 "name": "BaseBdev4", 00:17:10.275 "uuid": "e3d4e692-c7a0-4e71-9c30-bbac1dd6f1ac", 00:17:10.275 "is_configured": true, 00:17:10.275 "data_offset": 2048, 00:17:10.275 "data_size": 63488 00:17:10.275 } 00:17:10.275 ] 00:17:10.275 }' 00:17:10.275 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.275 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.845 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:10.845 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:10.845 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:10.845 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:10.845 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:10.845 
13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:10.845 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:10.845 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.845 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.845 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:10.845 [2024-11-20 13:37:10.092299] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:10.845 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.845 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:10.845 "name": "Existed_Raid", 00:17:10.845 "aliases": [ 00:17:10.845 "c5973937-bf07-461a-8c1e-ca9110847831" 00:17:10.845 ], 00:17:10.845 "product_name": "Raid Volume", 00:17:10.845 "block_size": 512, 00:17:10.845 "num_blocks": 63488, 00:17:10.845 "uuid": "c5973937-bf07-461a-8c1e-ca9110847831", 00:17:10.845 "assigned_rate_limits": { 00:17:10.845 "rw_ios_per_sec": 0, 00:17:10.845 "rw_mbytes_per_sec": 0, 00:17:10.845 "r_mbytes_per_sec": 0, 00:17:10.845 "w_mbytes_per_sec": 0 00:17:10.845 }, 00:17:10.845 "claimed": false, 00:17:10.845 "zoned": false, 00:17:10.845 "supported_io_types": { 00:17:10.845 "read": true, 00:17:10.845 "write": true, 00:17:10.845 "unmap": false, 00:17:10.845 "flush": false, 00:17:10.845 "reset": true, 00:17:10.845 "nvme_admin": false, 00:17:10.845 "nvme_io": false, 00:17:10.845 "nvme_io_md": false, 00:17:10.845 "write_zeroes": true, 00:17:10.845 "zcopy": false, 00:17:10.845 "get_zone_info": false, 00:17:10.845 "zone_management": false, 00:17:10.845 "zone_append": false, 00:17:10.845 "compare": false, 00:17:10.845 "compare_and_write": false, 00:17:10.845 
"abort": false, 00:17:10.845 "seek_hole": false, 00:17:10.845 "seek_data": false, 00:17:10.845 "copy": false, 00:17:10.845 "nvme_iov_md": false 00:17:10.845 }, 00:17:10.845 "memory_domains": [ 00:17:10.845 { 00:17:10.845 "dma_device_id": "system", 00:17:10.845 "dma_device_type": 1 00:17:10.845 }, 00:17:10.845 { 00:17:10.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.845 "dma_device_type": 2 00:17:10.845 }, 00:17:10.845 { 00:17:10.845 "dma_device_id": "system", 00:17:10.845 "dma_device_type": 1 00:17:10.845 }, 00:17:10.845 { 00:17:10.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.845 "dma_device_type": 2 00:17:10.845 }, 00:17:10.845 { 00:17:10.845 "dma_device_id": "system", 00:17:10.845 "dma_device_type": 1 00:17:10.846 }, 00:17:10.846 { 00:17:10.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.846 "dma_device_type": 2 00:17:10.846 }, 00:17:10.846 { 00:17:10.846 "dma_device_id": "system", 00:17:10.846 "dma_device_type": 1 00:17:10.846 }, 00:17:10.846 { 00:17:10.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.846 "dma_device_type": 2 00:17:10.846 } 00:17:10.846 ], 00:17:10.846 "driver_specific": { 00:17:10.846 "raid": { 00:17:10.846 "uuid": "c5973937-bf07-461a-8c1e-ca9110847831", 00:17:10.846 "strip_size_kb": 0, 00:17:10.846 "state": "online", 00:17:10.846 "raid_level": "raid1", 00:17:10.846 "superblock": true, 00:17:10.846 "num_base_bdevs": 4, 00:17:10.846 "num_base_bdevs_discovered": 4, 00:17:10.846 "num_base_bdevs_operational": 4, 00:17:10.846 "base_bdevs_list": [ 00:17:10.846 { 00:17:10.846 "name": "NewBaseBdev", 00:17:10.846 "uuid": "ff3a7453-cb37-4e03-97df-d05288983c2b", 00:17:10.846 "is_configured": true, 00:17:10.846 "data_offset": 2048, 00:17:10.846 "data_size": 63488 00:17:10.846 }, 00:17:10.846 { 00:17:10.846 "name": "BaseBdev2", 00:17:10.846 "uuid": "5d7d0057-8f08-4ce6-b1d8-3b4b9c6e118a", 00:17:10.846 "is_configured": true, 00:17:10.846 "data_offset": 2048, 00:17:10.846 "data_size": 63488 00:17:10.846 }, 00:17:10.846 { 
00:17:10.846 "name": "BaseBdev3", 00:17:10.846 "uuid": "e09f01a7-a7b4-470f-a305-ebebd7c3a3ac", 00:17:10.846 "is_configured": true, 00:17:10.846 "data_offset": 2048, 00:17:10.846 "data_size": 63488 00:17:10.846 }, 00:17:10.846 { 00:17:10.846 "name": "BaseBdev4", 00:17:10.846 "uuid": "e3d4e692-c7a0-4e71-9c30-bbac1dd6f1ac", 00:17:10.846 "is_configured": true, 00:17:10.846 "data_offset": 2048, 00:17:10.846 "data_size": 63488 00:17:10.846 } 00:17:10.846 ] 00:17:10.846 } 00:17:10.846 } 00:17:10.846 }' 00:17:10.846 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:10.846 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:10.846 BaseBdev2 00:17:10.846 BaseBdev3 00:17:10.846 BaseBdev4' 00:17:10.846 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:10.846 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:10.846 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:10.846 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:10.846 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.846 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:10.846 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.846 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.846 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:17:10.846 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:10.846 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:10.846 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:10.846 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:10.846 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.846 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.846 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.846 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:10.846 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:10.846 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:10.846 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:10.846 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.846 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.846 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:10.846 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.106 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:11.106 13:37:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:11.106 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:11.106 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:11.106 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.106 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.106 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:11.106 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.106 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:11.106 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:11.106 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:11.106 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.106 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.106 [2024-11-20 13:37:10.375526] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:11.106 [2024-11-20 13:37:10.375703] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:11.106 [2024-11-20 13:37:10.375823] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:11.106 [2024-11-20 13:37:10.376146] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:11.106 [2024-11-20 13:37:10.376166] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:17:11.106 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.106 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73595 00:17:11.106 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73595 ']' 00:17:11.106 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73595 00:17:11.106 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:11.106 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:11.106 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73595 00:17:11.106 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:11.106 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:11.106 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73595' 00:17:11.106 killing process with pid 73595 00:17:11.106 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73595 00:17:11.106 [2024-11-20 13:37:10.427755] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:11.106 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73595 00:17:11.365 [2024-11-20 13:37:10.833537] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:12.741 13:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:17:12.741 00:17:12.741 real 0m11.135s 00:17:12.741 user 0m17.676s 00:17:12.741 sys 0m2.161s 00:17:12.741 13:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:17:12.741 13:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.741 ************************************ 00:17:12.741 END TEST raid_state_function_test_sb 00:17:12.741 ************************************ 00:17:12.741 13:37:12 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:17:12.741 13:37:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:12.741 13:37:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:12.741 13:37:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:12.741 ************************************ 00:17:12.741 START TEST raid_superblock_test 00:17:12.741 ************************************ 00:17:12.741 13:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:17:12.741 13:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:12.741 13:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:17:12.741 13:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:12.742 13:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:12.742 13:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:12.742 13:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:12.742 13:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:12.742 13:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:12.742 13:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:12.742 13:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:12.742 13:37:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:12.742 13:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:12.742 13:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:12.742 13:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:12.742 13:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:12.742 13:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74260 00:17:12.742 13:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74260 00:17:12.742 13:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74260 ']' 00:17:12.742 13:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.742 13:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:12.742 13:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:12.742 13:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:12.742 13:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:12.742 13:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.742 [2024-11-20 13:37:12.141187] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:17:12.742 [2024-11-20 13:37:12.141307] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74260 ] 00:17:13.001 [2024-11-20 13:37:12.320352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.001 [2024-11-20 13:37:12.438625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.260 [2024-11-20 13:37:12.637226] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:13.260 [2024-11-20 13:37:12.637293] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:13.828 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:13.828 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:17:13.828 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:13.828 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:13.828 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:13.828 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:13.828 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:13.828 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:13.828 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:13.828 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:13.828 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:17:13.828 
13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.828 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.828 malloc1 00:17:13.828 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.828 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:13.828 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.828 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.828 [2024-11-20 13:37:13.172642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:13.829 [2024-11-20 13:37:13.172848] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.829 [2024-11-20 13:37:13.172911] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:13.829 [2024-11-20 13:37:13.173000] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.829 [2024-11-20 13:37:13.175564] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.829 [2024-11-20 13:37:13.175764] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:13.829 pt1 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.829 malloc2 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.829 [2024-11-20 13:37:13.222294] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:13.829 [2024-11-20 13:37:13.222359] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.829 [2024-11-20 13:37:13.222389] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:13.829 [2024-11-20 13:37:13.222401] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.829 [2024-11-20 13:37:13.224875] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.829 [2024-11-20 13:37:13.224913] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:13.829 
pt2 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.829 malloc3 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.829 [2024-11-20 13:37:13.284084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:13.829 [2024-11-20 13:37:13.284141] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.829 [2024-11-20 13:37:13.284164] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:13.829 [2024-11-20 13:37:13.284176] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.829 [2024-11-20 13:37:13.286553] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.829 [2024-11-20 13:37:13.286592] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:13.829 pt3 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.829 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.088 malloc4 00:17:14.088 13:37:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.088 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:14.088 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.088 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.088 [2024-11-20 13:37:13.334756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:14.088 [2024-11-20 13:37:13.334938] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.088 [2024-11-20 13:37:13.334970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:14.088 [2024-11-20 13:37:13.334983] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.088 [2024-11-20 13:37:13.337482] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.088 [2024-11-20 13:37:13.337520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:14.088 pt4 00:17:14.088 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.089 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:14.089 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:14.089 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:17:14.089 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.089 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.089 [2024-11-20 13:37:13.342773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:14.089 [2024-11-20 13:37:13.344944] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:14.089 [2024-11-20 13:37:13.345142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:14.089 [2024-11-20 13:37:13.345248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:14.089 [2024-11-20 13:37:13.345518] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:14.089 [2024-11-20 13:37:13.345618] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:14.089 [2024-11-20 13:37:13.345909] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:14.089 [2024-11-20 13:37:13.346104] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:14.089 [2024-11-20 13:37:13.346124] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:14.089 [2024-11-20 13:37:13.346290] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:14.089 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.089 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:14.089 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:14.089 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:14.089 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:14.089 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:14.089 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:14.089 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.089 
13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.089 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.089 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.089 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.089 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.089 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.089 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.089 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.089 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.089 "name": "raid_bdev1", 00:17:14.089 "uuid": "5f905c8a-54bd-4d85-b672-3d43b8c9e1e0", 00:17:14.089 "strip_size_kb": 0, 00:17:14.089 "state": "online", 00:17:14.089 "raid_level": "raid1", 00:17:14.089 "superblock": true, 00:17:14.089 "num_base_bdevs": 4, 00:17:14.089 "num_base_bdevs_discovered": 4, 00:17:14.089 "num_base_bdevs_operational": 4, 00:17:14.089 "base_bdevs_list": [ 00:17:14.089 { 00:17:14.089 "name": "pt1", 00:17:14.089 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:14.089 "is_configured": true, 00:17:14.089 "data_offset": 2048, 00:17:14.089 "data_size": 63488 00:17:14.089 }, 00:17:14.089 { 00:17:14.089 "name": "pt2", 00:17:14.089 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:14.089 "is_configured": true, 00:17:14.089 "data_offset": 2048, 00:17:14.089 "data_size": 63488 00:17:14.089 }, 00:17:14.089 { 00:17:14.089 "name": "pt3", 00:17:14.089 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:14.089 "is_configured": true, 00:17:14.089 "data_offset": 2048, 00:17:14.089 "data_size": 63488 
00:17:14.089 }, 00:17:14.089 { 00:17:14.089 "name": "pt4", 00:17:14.089 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:14.089 "is_configured": true, 00:17:14.089 "data_offset": 2048, 00:17:14.089 "data_size": 63488 00:17:14.089 } 00:17:14.089 ] 00:17:14.089 }' 00:17:14.089 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.089 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.349 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:14.349 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:14.349 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:14.349 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:14.349 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:14.349 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:14.349 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:14.349 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:14.349 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.349 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.349 [2024-11-20 13:37:13.750739] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:14.349 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.349 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:14.349 "name": "raid_bdev1", 00:17:14.349 "aliases": [ 00:17:14.349 "5f905c8a-54bd-4d85-b672-3d43b8c9e1e0" 00:17:14.349 ], 
00:17:14.349 "product_name": "Raid Volume", 00:17:14.349 "block_size": 512, 00:17:14.349 "num_blocks": 63488, 00:17:14.349 "uuid": "5f905c8a-54bd-4d85-b672-3d43b8c9e1e0", 00:17:14.349 "assigned_rate_limits": { 00:17:14.349 "rw_ios_per_sec": 0, 00:17:14.349 "rw_mbytes_per_sec": 0, 00:17:14.349 "r_mbytes_per_sec": 0, 00:17:14.349 "w_mbytes_per_sec": 0 00:17:14.349 }, 00:17:14.349 "claimed": false, 00:17:14.349 "zoned": false, 00:17:14.349 "supported_io_types": { 00:17:14.349 "read": true, 00:17:14.349 "write": true, 00:17:14.349 "unmap": false, 00:17:14.349 "flush": false, 00:17:14.349 "reset": true, 00:17:14.349 "nvme_admin": false, 00:17:14.349 "nvme_io": false, 00:17:14.349 "nvme_io_md": false, 00:17:14.349 "write_zeroes": true, 00:17:14.349 "zcopy": false, 00:17:14.349 "get_zone_info": false, 00:17:14.349 "zone_management": false, 00:17:14.349 "zone_append": false, 00:17:14.349 "compare": false, 00:17:14.349 "compare_and_write": false, 00:17:14.349 "abort": false, 00:17:14.349 "seek_hole": false, 00:17:14.349 "seek_data": false, 00:17:14.349 "copy": false, 00:17:14.349 "nvme_iov_md": false 00:17:14.349 }, 00:17:14.349 "memory_domains": [ 00:17:14.349 { 00:17:14.349 "dma_device_id": "system", 00:17:14.349 "dma_device_type": 1 00:17:14.349 }, 00:17:14.349 { 00:17:14.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.349 "dma_device_type": 2 00:17:14.349 }, 00:17:14.349 { 00:17:14.349 "dma_device_id": "system", 00:17:14.349 "dma_device_type": 1 00:17:14.349 }, 00:17:14.349 { 00:17:14.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.349 "dma_device_type": 2 00:17:14.349 }, 00:17:14.349 { 00:17:14.349 "dma_device_id": "system", 00:17:14.349 "dma_device_type": 1 00:17:14.349 }, 00:17:14.349 { 00:17:14.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.349 "dma_device_type": 2 00:17:14.349 }, 00:17:14.349 { 00:17:14.349 "dma_device_id": "system", 00:17:14.349 "dma_device_type": 1 00:17:14.349 }, 00:17:14.349 { 00:17:14.349 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:14.349 "dma_device_type": 2 00:17:14.349 } 00:17:14.349 ], 00:17:14.349 "driver_specific": { 00:17:14.349 "raid": { 00:17:14.349 "uuid": "5f905c8a-54bd-4d85-b672-3d43b8c9e1e0", 00:17:14.349 "strip_size_kb": 0, 00:17:14.349 "state": "online", 00:17:14.349 "raid_level": "raid1", 00:17:14.349 "superblock": true, 00:17:14.349 "num_base_bdevs": 4, 00:17:14.349 "num_base_bdevs_discovered": 4, 00:17:14.349 "num_base_bdevs_operational": 4, 00:17:14.349 "base_bdevs_list": [ 00:17:14.349 { 00:17:14.349 "name": "pt1", 00:17:14.349 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:14.349 "is_configured": true, 00:17:14.349 "data_offset": 2048, 00:17:14.349 "data_size": 63488 00:17:14.349 }, 00:17:14.349 { 00:17:14.349 "name": "pt2", 00:17:14.350 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:14.350 "is_configured": true, 00:17:14.350 "data_offset": 2048, 00:17:14.350 "data_size": 63488 00:17:14.350 }, 00:17:14.350 { 00:17:14.350 "name": "pt3", 00:17:14.350 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:14.350 "is_configured": true, 00:17:14.350 "data_offset": 2048, 00:17:14.350 "data_size": 63488 00:17:14.350 }, 00:17:14.350 { 00:17:14.350 "name": "pt4", 00:17:14.350 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:14.350 "is_configured": true, 00:17:14.350 "data_offset": 2048, 00:17:14.350 "data_size": 63488 00:17:14.350 } 00:17:14.350 ] 00:17:14.350 } 00:17:14.350 } 00:17:14.350 }' 00:17:14.350 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:14.350 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:14.350 pt2 00:17:14.350 pt3 00:17:14.350 pt4' 00:17:14.350 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:14.670 13:37:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:14.670 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:14.670 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:14.670 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.670 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.670 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:14.670 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.670 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:14.670 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:14.670 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:14.670 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:14.670 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.670 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.670 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:14.670 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.670 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:14.670 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:14.670 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:14.670 13:37:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:14.670 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.670 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.670 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:14.670 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.670 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:14.670 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:14.670 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:14.670 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:14.670 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:14.670 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.670 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.670 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.670 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:14.670 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:14.670 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:14.670 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.670 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:17:14.670 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:14.670 [2024-11-20 13:37:14.034692] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:14.670 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.670 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5f905c8a-54bd-4d85-b672-3d43b8c9e1e0 00:17:14.670 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5f905c8a-54bd-4d85-b672-3d43b8c9e1e0 ']' 00:17:14.670 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:14.670 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.670 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.670 [2024-11-20 13:37:14.078413] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:14.670 [2024-11-20 13:37:14.078584] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:14.670 [2024-11-20 13:37:14.078690] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:14.670 [2024-11-20 13:37:14.078778] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:14.670 [2024-11-20 13:37:14.078798] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:14.670 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.670 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.670 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:14.670 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:14.670 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.670 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.670 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:14.670 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:14.670 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:14.670 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:14.670 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.670 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.670 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.670 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:14.670 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:14.670 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.670 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.670 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.670 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:14.670 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:17:14.670 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.670 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.932 [2024-11-20 13:37:14.226431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:14.932 [2024-11-20 13:37:14.228781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:14.932 [2024-11-20 13:37:14.228834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:14.932 [2024-11-20 13:37:14.228873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:14.932 [2024-11-20 13:37:14.228924] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:14.932 [2024-11-20 13:37:14.228982] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:14.932 [2024-11-20 13:37:14.229005] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:14.932 [2024-11-20 13:37:14.229028] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:17:14.932 [2024-11-20 13:37:14.229046] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:14.932 [2024-11-20 13:37:14.229070] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 
00:17:14.932 request: 00:17:14.932 { 00:17:14.932 "name": "raid_bdev1", 00:17:14.932 "raid_level": "raid1", 00:17:14.932 "base_bdevs": [ 00:17:14.932 "malloc1", 00:17:14.932 "malloc2", 00:17:14.932 "malloc3", 00:17:14.932 "malloc4" 00:17:14.932 ], 00:17:14.932 "superblock": false, 00:17:14.932 "method": "bdev_raid_create", 00:17:14.932 "req_id": 1 00:17:14.932 } 00:17:14.932 Got JSON-RPC error response 00:17:14.932 response: 00:17:14.932 { 00:17:14.932 "code": -17, 00:17:14.932 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:14.932 } 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:14.932 13:37:14 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.932 [2024-11-20 13:37:14.274397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:14.932 [2024-11-20 13:37:14.274595] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.932 [2024-11-20 13:37:14.274652] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:14.932 [2024-11-20 13:37:14.274768] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.932 [2024-11-20 13:37:14.277391] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.932 [2024-11-20 13:37:14.277540] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:14.932 [2024-11-20 13:37:14.277698] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:14.932 [2024-11-20 13:37:14.277873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:14.932 pt1 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:14.932 13:37:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.932 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.933 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.933 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.933 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.933 "name": "raid_bdev1", 00:17:14.933 "uuid": "5f905c8a-54bd-4d85-b672-3d43b8c9e1e0", 00:17:14.933 "strip_size_kb": 0, 00:17:14.933 "state": "configuring", 00:17:14.933 "raid_level": "raid1", 00:17:14.933 "superblock": true, 00:17:14.933 "num_base_bdevs": 4, 00:17:14.933 "num_base_bdevs_discovered": 1, 00:17:14.933 "num_base_bdevs_operational": 4, 00:17:14.933 "base_bdevs_list": [ 00:17:14.933 { 00:17:14.933 "name": "pt1", 00:17:14.933 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:14.933 "is_configured": true, 00:17:14.933 "data_offset": 2048, 00:17:14.933 "data_size": 63488 00:17:14.933 }, 00:17:14.933 { 00:17:14.933 "name": null, 00:17:14.933 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:14.933 "is_configured": false, 00:17:14.933 "data_offset": 2048, 00:17:14.933 "data_size": 63488 00:17:14.933 }, 00:17:14.933 { 00:17:14.933 "name": null, 00:17:14.933 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:14.933 
"is_configured": false, 00:17:14.933 "data_offset": 2048, 00:17:14.933 "data_size": 63488 00:17:14.933 }, 00:17:14.933 { 00:17:14.933 "name": null, 00:17:14.933 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:14.933 "is_configured": false, 00:17:14.933 "data_offset": 2048, 00:17:14.933 "data_size": 63488 00:17:14.933 } 00:17:14.933 ] 00:17:14.933 }' 00:17:14.933 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.933 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.500 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:17:15.500 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:15.500 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.500 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.500 [2024-11-20 13:37:14.694429] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:15.500 [2024-11-20 13:37:14.694633] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.500 [2024-11-20 13:37:14.694667] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:15.500 [2024-11-20 13:37:14.694683] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.500 [2024-11-20 13:37:14.695149] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.500 [2024-11-20 13:37:14.695174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:15.500 [2024-11-20 13:37:14.695263] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:15.500 [2024-11-20 13:37:14.695295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:17:15.500 pt2 00:17:15.500 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.500 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:17:15.500 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.500 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.501 [2024-11-20 13:37:14.702427] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:15.501 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.501 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:17:15.501 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.501 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:15.501 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:15.501 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:15.501 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:15.501 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.501 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.501 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.501 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.501 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.501 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:17:15.501 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.501 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.501 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.501 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.501 "name": "raid_bdev1", 00:17:15.501 "uuid": "5f905c8a-54bd-4d85-b672-3d43b8c9e1e0", 00:17:15.501 "strip_size_kb": 0, 00:17:15.501 "state": "configuring", 00:17:15.501 "raid_level": "raid1", 00:17:15.501 "superblock": true, 00:17:15.501 "num_base_bdevs": 4, 00:17:15.501 "num_base_bdevs_discovered": 1, 00:17:15.501 "num_base_bdevs_operational": 4, 00:17:15.501 "base_bdevs_list": [ 00:17:15.501 { 00:17:15.501 "name": "pt1", 00:17:15.501 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:15.501 "is_configured": true, 00:17:15.501 "data_offset": 2048, 00:17:15.501 "data_size": 63488 00:17:15.501 }, 00:17:15.501 { 00:17:15.501 "name": null, 00:17:15.501 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:15.501 "is_configured": false, 00:17:15.501 "data_offset": 0, 00:17:15.501 "data_size": 63488 00:17:15.501 }, 00:17:15.501 { 00:17:15.501 "name": null, 00:17:15.501 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:15.501 "is_configured": false, 00:17:15.501 "data_offset": 2048, 00:17:15.501 "data_size": 63488 00:17:15.501 }, 00:17:15.501 { 00:17:15.501 "name": null, 00:17:15.501 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:15.501 "is_configured": false, 00:17:15.501 "data_offset": 2048, 00:17:15.501 "data_size": 63488 00:17:15.501 } 00:17:15.501 ] 00:17:15.501 }' 00:17:15.501 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.501 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.760 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:17:15.760 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:15.760 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:15.760 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.760 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.760 [2024-11-20 13:37:15.130423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:15.760 [2024-11-20 13:37:15.130630] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.760 [2024-11-20 13:37:15.130684] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:15.760 [2024-11-20 13:37:15.130697] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.760 [2024-11-20 13:37:15.131204] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.760 [2024-11-20 13:37:15.131226] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:15.760 [2024-11-20 13:37:15.131316] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:15.760 [2024-11-20 13:37:15.131339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:15.760 pt2 00:17:15.760 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.760 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:15.760 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:15.760 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:15.760 13:37:15 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.760 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.760 [2024-11-20 13:37:15.138415] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:15.760 [2024-11-20 13:37:15.138479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.760 [2024-11-20 13:37:15.138506] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:15.760 [2024-11-20 13:37:15.138520] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.760 [2024-11-20 13:37:15.138956] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.760 [2024-11-20 13:37:15.138977] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:15.760 [2024-11-20 13:37:15.139075] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:15.760 [2024-11-20 13:37:15.139098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:15.760 pt3 00:17:15.760 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.760 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:15.760 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:15.760 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:15.760 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.760 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.760 [2024-11-20 13:37:15.146389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:15.760 [2024-11-20 
13:37:15.146578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.760 [2024-11-20 13:37:15.146729] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:15.760 [2024-11-20 13:37:15.146826] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.760 [2024-11-20 13:37:15.147324] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.760 [2024-11-20 13:37:15.147465] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:15.760 [2024-11-20 13:37:15.147564] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:15.760 [2024-11-20 13:37:15.147595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:15.760 [2024-11-20 13:37:15.147740] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:15.760 [2024-11-20 13:37:15.147751] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:15.760 [2024-11-20 13:37:15.148028] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:15.760 [2024-11-20 13:37:15.148314] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:15.760 [2024-11-20 13:37:15.148410] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:15.760 [2024-11-20 13:37:15.148652] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.760 pt4 00:17:15.760 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.760 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:15.760 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:15.760 13:37:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:15.760 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.760 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.760 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:15.760 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:15.760 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:15.760 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.760 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.760 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.760 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.760 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.760 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.760 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.760 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.760 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.760 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.760 "name": "raid_bdev1", 00:17:15.760 "uuid": "5f905c8a-54bd-4d85-b672-3d43b8c9e1e0", 00:17:15.760 "strip_size_kb": 0, 00:17:15.760 "state": "online", 00:17:15.760 "raid_level": "raid1", 00:17:15.760 "superblock": true, 00:17:15.760 "num_base_bdevs": 4, 00:17:15.760 
"num_base_bdevs_discovered": 4, 00:17:15.760 "num_base_bdevs_operational": 4, 00:17:15.760 "base_bdevs_list": [ 00:17:15.760 { 00:17:15.760 "name": "pt1", 00:17:15.761 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:15.761 "is_configured": true, 00:17:15.761 "data_offset": 2048, 00:17:15.761 "data_size": 63488 00:17:15.761 }, 00:17:15.761 { 00:17:15.761 "name": "pt2", 00:17:15.761 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:15.761 "is_configured": true, 00:17:15.761 "data_offset": 2048, 00:17:15.761 "data_size": 63488 00:17:15.761 }, 00:17:15.761 { 00:17:15.761 "name": "pt3", 00:17:15.761 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:15.761 "is_configured": true, 00:17:15.761 "data_offset": 2048, 00:17:15.761 "data_size": 63488 00:17:15.761 }, 00:17:15.761 { 00:17:15.761 "name": "pt4", 00:17:15.761 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:15.761 "is_configured": true, 00:17:15.761 "data_offset": 2048, 00:17:15.761 "data_size": 63488 00:17:15.761 } 00:17:15.761 ] 00:17:15.761 }' 00:17:15.761 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.761 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.327 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:16.327 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:16.327 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:16.327 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:16.327 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:16.327 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:16.327 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:16.327 13:37:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:16.327 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.327 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.327 [2024-11-20 13:37:15.550749] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:16.327 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.327 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:16.327 "name": "raid_bdev1", 00:17:16.327 "aliases": [ 00:17:16.327 "5f905c8a-54bd-4d85-b672-3d43b8c9e1e0" 00:17:16.327 ], 00:17:16.327 "product_name": "Raid Volume", 00:17:16.327 "block_size": 512, 00:17:16.327 "num_blocks": 63488, 00:17:16.328 "uuid": "5f905c8a-54bd-4d85-b672-3d43b8c9e1e0", 00:17:16.328 "assigned_rate_limits": { 00:17:16.328 "rw_ios_per_sec": 0, 00:17:16.328 "rw_mbytes_per_sec": 0, 00:17:16.328 "r_mbytes_per_sec": 0, 00:17:16.328 "w_mbytes_per_sec": 0 00:17:16.328 }, 00:17:16.328 "claimed": false, 00:17:16.328 "zoned": false, 00:17:16.328 "supported_io_types": { 00:17:16.328 "read": true, 00:17:16.328 "write": true, 00:17:16.328 "unmap": false, 00:17:16.328 "flush": false, 00:17:16.328 "reset": true, 00:17:16.328 "nvme_admin": false, 00:17:16.328 "nvme_io": false, 00:17:16.328 "nvme_io_md": false, 00:17:16.328 "write_zeroes": true, 00:17:16.328 "zcopy": false, 00:17:16.328 "get_zone_info": false, 00:17:16.328 "zone_management": false, 00:17:16.328 "zone_append": false, 00:17:16.328 "compare": false, 00:17:16.328 "compare_and_write": false, 00:17:16.328 "abort": false, 00:17:16.328 "seek_hole": false, 00:17:16.328 "seek_data": false, 00:17:16.328 "copy": false, 00:17:16.328 "nvme_iov_md": false 00:17:16.328 }, 00:17:16.328 "memory_domains": [ 00:17:16.328 { 00:17:16.328 "dma_device_id": "system", 00:17:16.328 
"dma_device_type": 1 00:17:16.328 }, 00:17:16.328 { 00:17:16.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.328 "dma_device_type": 2 00:17:16.328 }, 00:17:16.328 { 00:17:16.328 "dma_device_id": "system", 00:17:16.328 "dma_device_type": 1 00:17:16.328 }, 00:17:16.328 { 00:17:16.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.328 "dma_device_type": 2 00:17:16.328 }, 00:17:16.328 { 00:17:16.328 "dma_device_id": "system", 00:17:16.328 "dma_device_type": 1 00:17:16.328 }, 00:17:16.328 { 00:17:16.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.328 "dma_device_type": 2 00:17:16.328 }, 00:17:16.328 { 00:17:16.328 "dma_device_id": "system", 00:17:16.328 "dma_device_type": 1 00:17:16.328 }, 00:17:16.328 { 00:17:16.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.328 "dma_device_type": 2 00:17:16.328 } 00:17:16.328 ], 00:17:16.328 "driver_specific": { 00:17:16.328 "raid": { 00:17:16.328 "uuid": "5f905c8a-54bd-4d85-b672-3d43b8c9e1e0", 00:17:16.328 "strip_size_kb": 0, 00:17:16.328 "state": "online", 00:17:16.328 "raid_level": "raid1", 00:17:16.328 "superblock": true, 00:17:16.328 "num_base_bdevs": 4, 00:17:16.328 "num_base_bdevs_discovered": 4, 00:17:16.328 "num_base_bdevs_operational": 4, 00:17:16.328 "base_bdevs_list": [ 00:17:16.328 { 00:17:16.328 "name": "pt1", 00:17:16.328 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:16.328 "is_configured": true, 00:17:16.328 "data_offset": 2048, 00:17:16.328 "data_size": 63488 00:17:16.328 }, 00:17:16.328 { 00:17:16.328 "name": "pt2", 00:17:16.328 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:16.328 "is_configured": true, 00:17:16.328 "data_offset": 2048, 00:17:16.328 "data_size": 63488 00:17:16.328 }, 00:17:16.328 { 00:17:16.328 "name": "pt3", 00:17:16.328 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:16.328 "is_configured": true, 00:17:16.328 "data_offset": 2048, 00:17:16.328 "data_size": 63488 00:17:16.328 }, 00:17:16.328 { 00:17:16.328 "name": "pt4", 00:17:16.328 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:17:16.328 "is_configured": true, 00:17:16.328 "data_offset": 2048, 00:17:16.328 "data_size": 63488 00:17:16.328 } 00:17:16.328 ] 00:17:16.328 } 00:17:16.328 } 00:17:16.328 }' 00:17:16.328 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:16.328 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:16.328 pt2 00:17:16.328 pt3 00:17:16.328 pt4' 00:17:16.328 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:16.328 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:16.328 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:16.328 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:16.328 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.328 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.328 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:16.328 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.328 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:16.328 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:16.328 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:16.328 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:16.328 13:37:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:16.328 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.328 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.328 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.328 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:16.328 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:16.328 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:16.328 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:16.328 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:16.328 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.328 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.328 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.328 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:16.328 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:16.328 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:16.328 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:16.328 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.328 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.328 13:37:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:16.587 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.587 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:16.587 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:16.587 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:16.587 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.587 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.587 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:16.587 [2024-11-20 13:37:15.858846] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:16.587 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.587 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5f905c8a-54bd-4d85-b672-3d43b8c9e1e0 '!=' 5f905c8a-54bd-4d85-b672-3d43b8c9e1e0 ']' 00:17:16.587 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:16.587 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:16.587 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:16.587 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:16.587 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.587 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.587 [2024-11-20 13:37:15.902489] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:16.587 13:37:15 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.587 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:16.587 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:16.587 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:16.587 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:16.587 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:16.587 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:16.587 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.587 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.587 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.587 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.587 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.587 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.587 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.587 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.587 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.587 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.587 "name": "raid_bdev1", 00:17:16.587 "uuid": "5f905c8a-54bd-4d85-b672-3d43b8c9e1e0", 00:17:16.587 "strip_size_kb": 0, 00:17:16.587 "state": "online", 
00:17:16.587 "raid_level": "raid1", 00:17:16.587 "superblock": true, 00:17:16.587 "num_base_bdevs": 4, 00:17:16.587 "num_base_bdevs_discovered": 3, 00:17:16.587 "num_base_bdevs_operational": 3, 00:17:16.587 "base_bdevs_list": [ 00:17:16.587 { 00:17:16.587 "name": null, 00:17:16.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.587 "is_configured": false, 00:17:16.587 "data_offset": 0, 00:17:16.587 "data_size": 63488 00:17:16.587 }, 00:17:16.587 { 00:17:16.587 "name": "pt2", 00:17:16.587 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:16.587 "is_configured": true, 00:17:16.587 "data_offset": 2048, 00:17:16.587 "data_size": 63488 00:17:16.587 }, 00:17:16.587 { 00:17:16.587 "name": "pt3", 00:17:16.587 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:16.587 "is_configured": true, 00:17:16.587 "data_offset": 2048, 00:17:16.587 "data_size": 63488 00:17:16.587 }, 00:17:16.587 { 00:17:16.587 "name": "pt4", 00:17:16.587 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:16.587 "is_configured": true, 00:17:16.587 "data_offset": 2048, 00:17:16.587 "data_size": 63488 00:17:16.587 } 00:17:16.587 ] 00:17:16.587 }' 00:17:16.587 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.587 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.846 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:16.846 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.846 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.846 [2024-11-20 13:37:16.322432] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:16.846 [2024-11-20 13:37:16.322476] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:16.846 [2024-11-20 13:37:16.322570] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:17:16.846 [2024-11-20 13:37:16.322664] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:16.846 [2024-11-20 13:37:16.322677] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:16.846 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.105 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.105 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.105 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:17.105 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.105 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.105 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:17.105 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:17.105 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:17.105 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:17.105 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:17.105 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.105 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.105 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.105 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:17.105 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:17.105 
13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:17:17.105 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.106 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.106 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.106 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:17.106 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:17.106 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:17:17.106 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.106 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.106 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.106 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:17.106 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:17.106 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:17.106 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:17.106 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:17.106 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.106 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.106 [2024-11-20 13:37:16.414429] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:17.106 [2024-11-20 13:37:16.414500] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:17.106 [2024-11-20 13:37:16.414535] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:17.106 [2024-11-20 13:37:16.414550] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.106 [2024-11-20 13:37:16.417154] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.106 [2024-11-20 13:37:16.417195] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:17.106 [2024-11-20 13:37:16.417316] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:17.106 [2024-11-20 13:37:16.417388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:17.106 pt2 00:17:17.106 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.106 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:17.106 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.106 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:17.106 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:17.106 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:17.106 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:17.106 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.106 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.106 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.106 13:37:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.106 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.106 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.106 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.106 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.106 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.106 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.106 "name": "raid_bdev1", 00:17:17.106 "uuid": "5f905c8a-54bd-4d85-b672-3d43b8c9e1e0", 00:17:17.106 "strip_size_kb": 0, 00:17:17.106 "state": "configuring", 00:17:17.106 "raid_level": "raid1", 00:17:17.106 "superblock": true, 00:17:17.106 "num_base_bdevs": 4, 00:17:17.106 "num_base_bdevs_discovered": 1, 00:17:17.106 "num_base_bdevs_operational": 3, 00:17:17.106 "base_bdevs_list": [ 00:17:17.106 { 00:17:17.106 "name": null, 00:17:17.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.106 "is_configured": false, 00:17:17.106 "data_offset": 2048, 00:17:17.106 "data_size": 63488 00:17:17.106 }, 00:17:17.106 { 00:17:17.106 "name": "pt2", 00:17:17.106 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:17.106 "is_configured": true, 00:17:17.106 "data_offset": 2048, 00:17:17.106 "data_size": 63488 00:17:17.106 }, 00:17:17.106 { 00:17:17.106 "name": null, 00:17:17.106 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:17.106 "is_configured": false, 00:17:17.106 "data_offset": 2048, 00:17:17.106 "data_size": 63488 00:17:17.106 }, 00:17:17.106 { 00:17:17.106 "name": null, 00:17:17.106 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:17.106 "is_configured": false, 00:17:17.106 "data_offset": 2048, 00:17:17.106 "data_size": 63488 00:17:17.106 } 00:17:17.106 ] 00:17:17.106 }' 
00:17:17.106 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.106 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.365 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:17.365 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:17.365 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:17.365 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.365 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.624 [2024-11-20 13:37:16.854438] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:17.624 [2024-11-20 13:37:16.854625] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:17.624 [2024-11-20 13:37:16.854688] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:17.624 [2024-11-20 13:37:16.854776] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.624 [2024-11-20 13:37:16.855264] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.624 [2024-11-20 13:37:16.855291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:17.624 [2024-11-20 13:37:16.855385] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:17.624 [2024-11-20 13:37:16.855406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:17.624 pt3 00:17:17.624 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.624 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:17:17.624 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.624 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:17.624 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:17.624 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:17.624 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:17.624 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.624 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.624 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.624 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.624 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.624 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.624 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.624 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.624 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.624 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.624 "name": "raid_bdev1", 00:17:17.624 "uuid": "5f905c8a-54bd-4d85-b672-3d43b8c9e1e0", 00:17:17.624 "strip_size_kb": 0, 00:17:17.624 "state": "configuring", 00:17:17.624 "raid_level": "raid1", 00:17:17.624 "superblock": true, 00:17:17.624 "num_base_bdevs": 4, 00:17:17.624 "num_base_bdevs_discovered": 2, 00:17:17.624 "num_base_bdevs_operational": 3, 00:17:17.624 
"base_bdevs_list": [ 00:17:17.624 { 00:17:17.624 "name": null, 00:17:17.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.624 "is_configured": false, 00:17:17.624 "data_offset": 2048, 00:17:17.624 "data_size": 63488 00:17:17.624 }, 00:17:17.624 { 00:17:17.624 "name": "pt2", 00:17:17.624 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:17.624 "is_configured": true, 00:17:17.624 "data_offset": 2048, 00:17:17.624 "data_size": 63488 00:17:17.624 }, 00:17:17.624 { 00:17:17.624 "name": "pt3", 00:17:17.624 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:17.624 "is_configured": true, 00:17:17.624 "data_offset": 2048, 00:17:17.624 "data_size": 63488 00:17:17.624 }, 00:17:17.624 { 00:17:17.624 "name": null, 00:17:17.624 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:17.624 "is_configured": false, 00:17:17.624 "data_offset": 2048, 00:17:17.624 "data_size": 63488 00:17:17.624 } 00:17:17.624 ] 00:17:17.624 }' 00:17:17.624 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.624 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.883 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:17.883 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:17.883 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:17:17.883 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:17.883 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.883 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.883 [2024-11-20 13:37:17.306229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:17.883 [2024-11-20 13:37:17.306433] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:17.883 [2024-11-20 13:37:17.306471] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:17.883 [2024-11-20 13:37:17.306483] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.883 [2024-11-20 13:37:17.306937] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.883 [2024-11-20 13:37:17.306957] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:17.883 [2024-11-20 13:37:17.307040] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:17.883 [2024-11-20 13:37:17.307084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:17.883 [2024-11-20 13:37:17.307210] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:17.883 [2024-11-20 13:37:17.307219] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:17.883 [2024-11-20 13:37:17.307471] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:17.883 [2024-11-20 13:37:17.307640] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:17.883 [2024-11-20 13:37:17.307655] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:17.883 [2024-11-20 13:37:17.307793] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.883 pt4 00:17:17.883 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.883 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:17.883 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.883 13:37:17 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:17.883 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:17.883 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:17.883 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:17.883 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.883 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.883 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.883 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.883 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.883 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.883 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.883 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.883 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.883 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.883 "name": "raid_bdev1", 00:17:17.883 "uuid": "5f905c8a-54bd-4d85-b672-3d43b8c9e1e0", 00:17:17.883 "strip_size_kb": 0, 00:17:17.883 "state": "online", 00:17:17.883 "raid_level": "raid1", 00:17:17.883 "superblock": true, 00:17:17.883 "num_base_bdevs": 4, 00:17:17.883 "num_base_bdevs_discovered": 3, 00:17:17.883 "num_base_bdevs_operational": 3, 00:17:17.883 "base_bdevs_list": [ 00:17:17.883 { 00:17:17.883 "name": null, 00:17:17.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.883 "is_configured": false, 00:17:17.883 
"data_offset": 2048, 00:17:17.883 "data_size": 63488 00:17:17.884 }, 00:17:17.884 { 00:17:17.884 "name": "pt2", 00:17:17.884 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:17.884 "is_configured": true, 00:17:17.884 "data_offset": 2048, 00:17:17.884 "data_size": 63488 00:17:17.884 }, 00:17:17.884 { 00:17:17.884 "name": "pt3", 00:17:17.884 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:17.884 "is_configured": true, 00:17:17.884 "data_offset": 2048, 00:17:17.884 "data_size": 63488 00:17:17.884 }, 00:17:17.884 { 00:17:17.884 "name": "pt4", 00:17:17.884 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:17.884 "is_configured": true, 00:17:17.884 "data_offset": 2048, 00:17:17.884 "data_size": 63488 00:17:17.884 } 00:17:17.884 ] 00:17:17.884 }' 00:17:17.884 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.884 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.451 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:18.451 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.451 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.451 [2024-11-20 13:37:17.757532] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:18.451 [2024-11-20 13:37:17.757689] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:18.451 [2024-11-20 13:37:17.757856] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:18.451 [2024-11-20 13:37:17.757975] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:18.451 [2024-11-20 13:37:17.758201] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:18.451 13:37:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.451 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.451 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.451 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:18.451 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.451 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.451 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:18.451 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:18.451 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:17:18.451 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:17:18.451 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:17:18.451 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.451 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.451 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.451 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:18.451 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.451 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.451 [2024-11-20 13:37:17.829417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:18.451 [2024-11-20 13:37:17.829631] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:17:18.451 [2024-11-20 13:37:17.829664] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:17:18.451 [2024-11-20 13:37:17.829682] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.451 [2024-11-20 13:37:17.832269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.451 [2024-11-20 13:37:17.832361] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:18.451 [2024-11-20 13:37:17.832462] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:18.451 [2024-11-20 13:37:17.832510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:18.451 [2024-11-20 13:37:17.832642] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:18.451 [2024-11-20 13:37:17.832659] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:18.451 [2024-11-20 13:37:17.832678] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:18.451 [2024-11-20 13:37:17.832757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:18.451 [2024-11-20 13:37:17.832855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:18.451 pt1 00:17:18.451 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.451 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:17:18.451 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:18.451 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.451 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:17:18.451 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:18.451 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:18.451 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:18.451 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.451 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.451 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.451 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.451 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.451 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.451 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.451 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.451 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.451 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.451 "name": "raid_bdev1", 00:17:18.451 "uuid": "5f905c8a-54bd-4d85-b672-3d43b8c9e1e0", 00:17:18.451 "strip_size_kb": 0, 00:17:18.451 "state": "configuring", 00:17:18.451 "raid_level": "raid1", 00:17:18.451 "superblock": true, 00:17:18.451 "num_base_bdevs": 4, 00:17:18.451 "num_base_bdevs_discovered": 2, 00:17:18.451 "num_base_bdevs_operational": 3, 00:17:18.451 "base_bdevs_list": [ 00:17:18.451 { 00:17:18.451 "name": null, 00:17:18.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.451 "is_configured": false, 00:17:18.451 "data_offset": 2048, 00:17:18.451 
"data_size": 63488 00:17:18.451 }, 00:17:18.451 { 00:17:18.451 "name": "pt2", 00:17:18.451 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:18.451 "is_configured": true, 00:17:18.451 "data_offset": 2048, 00:17:18.451 "data_size": 63488 00:17:18.451 }, 00:17:18.451 { 00:17:18.451 "name": "pt3", 00:17:18.451 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:18.451 "is_configured": true, 00:17:18.451 "data_offset": 2048, 00:17:18.451 "data_size": 63488 00:17:18.451 }, 00:17:18.451 { 00:17:18.451 "name": null, 00:17:18.451 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:18.451 "is_configured": false, 00:17:18.451 "data_offset": 2048, 00:17:18.451 "data_size": 63488 00:17:18.451 } 00:17:18.451 ] 00:17:18.451 }' 00:17:18.451 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.451 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.019 13:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:17:19.019 13:37:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.019 13:37:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.019 13:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:19.019 13:37:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.019 13:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:17:19.019 13:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:19.019 13:37:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.019 13:37:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.019 [2024-11-20 
13:37:18.293224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:19.019 [2024-11-20 13:37:18.293492] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.019 [2024-11-20 13:37:18.293532] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:17:19.019 [2024-11-20 13:37:18.293544] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.019 [2024-11-20 13:37:18.294008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.019 [2024-11-20 13:37:18.294029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:19.019 [2024-11-20 13:37:18.294148] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:19.019 [2024-11-20 13:37:18.294173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:19.019 [2024-11-20 13:37:18.294316] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:19.019 [2024-11-20 13:37:18.294368] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:19.019 [2024-11-20 13:37:18.294639] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:19.019 [2024-11-20 13:37:18.294782] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:19.019 [2024-11-20 13:37:18.294795] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:19.019 [2024-11-20 13:37:18.294930] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:19.019 pt4 00:17:19.019 13:37:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.019 13:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:19.019 13:37:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.019 13:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:19.019 13:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:19.019 13:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:19.019 13:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:19.019 13:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.019 13:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.019 13:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.019 13:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.019 13:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.019 13:37:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.019 13:37:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.019 13:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.019 13:37:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.019 13:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.019 "name": "raid_bdev1", 00:17:19.019 "uuid": "5f905c8a-54bd-4d85-b672-3d43b8c9e1e0", 00:17:19.019 "strip_size_kb": 0, 00:17:19.019 "state": "online", 00:17:19.019 "raid_level": "raid1", 00:17:19.019 "superblock": true, 00:17:19.019 "num_base_bdevs": 4, 00:17:19.019 "num_base_bdevs_discovered": 3, 00:17:19.020 "num_base_bdevs_operational": 3, 00:17:19.020 "base_bdevs_list": [ 00:17:19.020 { 
00:17:19.020 "name": null, 00:17:19.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.020 "is_configured": false, 00:17:19.020 "data_offset": 2048, 00:17:19.020 "data_size": 63488 00:17:19.020 }, 00:17:19.020 { 00:17:19.020 "name": "pt2", 00:17:19.020 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:19.020 "is_configured": true, 00:17:19.020 "data_offset": 2048, 00:17:19.020 "data_size": 63488 00:17:19.020 }, 00:17:19.020 { 00:17:19.020 "name": "pt3", 00:17:19.020 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:19.020 "is_configured": true, 00:17:19.020 "data_offset": 2048, 00:17:19.020 "data_size": 63488 00:17:19.020 }, 00:17:19.020 { 00:17:19.020 "name": "pt4", 00:17:19.020 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:19.020 "is_configured": true, 00:17:19.020 "data_offset": 2048, 00:17:19.020 "data_size": 63488 00:17:19.020 } 00:17:19.020 ] 00:17:19.020 }' 00:17:19.020 13:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.020 13:37:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.279 13:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:19.279 13:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:19.279 13:37:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.279 13:37:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.279 13:37:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.279 13:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:19.279 13:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:19.279 13:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:19.279 
13:37:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.279 13:37:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.279 [2024-11-20 13:37:18.753013] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:19.539 13:37:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.539 13:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 5f905c8a-54bd-4d85-b672-3d43b8c9e1e0 '!=' 5f905c8a-54bd-4d85-b672-3d43b8c9e1e0 ']' 00:17:19.539 13:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74260 00:17:19.539 13:37:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74260 ']' 00:17:19.539 13:37:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74260 00:17:19.539 13:37:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:17:19.539 13:37:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:19.539 13:37:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74260 00:17:19.539 killing process with pid 74260 00:17:19.539 13:37:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:19.539 13:37:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:19.539 13:37:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74260' 00:17:19.539 13:37:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74260 00:17:19.539 [2024-11-20 13:37:18.835861] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:19.539 [2024-11-20 13:37:18.835978] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:19.539 13:37:18 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74260 00:17:19.539 [2024-11-20 13:37:18.836054] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:19.539 [2024-11-20 13:37:18.836083] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:19.892 [2024-11-20 13:37:19.237559] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:21.275 13:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:17:21.275 00:17:21.275 real 0m8.332s 00:17:21.275 user 0m13.162s 00:17:21.275 sys 0m1.576s 00:17:21.275 13:37:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:21.275 13:37:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.275 ************************************ 00:17:21.275 END TEST raid_superblock_test 00:17:21.275 ************************************ 00:17:21.275 13:37:20 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:17:21.275 13:37:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:21.275 13:37:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:21.275 13:37:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:21.275 ************************************ 00:17:21.275 START TEST raid_read_error_test 00:17:21.275 ************************************ 00:17:21.275 13:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:17:21.275 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:17:21.275 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:17:21.275 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:17:21.275 
13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:17:21.275 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:21.275 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:17:21.275 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:21.275 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:21.275 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:17:21.275 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:21.275 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:21.275 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:17:21.275 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:21.275 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:21.275 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:17:21.275 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:21.275 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:21.275 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:21.275 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:17:21.275 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:17:21.275 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:17:21.275 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:17:21.275 13:37:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:17:21.275 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:17:21.275 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:17:21.275 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:17:21.275 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:17:21.275 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.D7zskOPKUc 00:17:21.275 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74747 00:17:21.275 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74747 00:17:21.275 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:21.275 13:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 74747 ']' 00:17:21.275 13:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.275 13:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:21.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.275 13:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.275 13:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:21.275 13:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.275 [2024-11-20 13:37:20.582954] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:17:21.275 [2024-11-20 13:37:20.583559] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74747 ] 00:17:21.534 [2024-11-20 13:37:20.761787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.534 [2024-11-20 13:37:20.881318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.794 [2024-11-20 13:37:21.121863] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:21.794 [2024-11-20 13:37:21.121920] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:22.054 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:22.055 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:17:22.055 13:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:22.055 13:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:22.055 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.055 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.055 BaseBdev1_malloc 00:17:22.055 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.055 13:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:17:22.055 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.055 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.055 true 00:17:22.055 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:22.055 13:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:22.055 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.055 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.055 [2024-11-20 13:37:21.488624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:22.055 [2024-11-20 13:37:21.488843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.055 [2024-11-20 13:37:21.488879] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:17:22.055 [2024-11-20 13:37:21.488896] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.055 [2024-11-20 13:37:21.491447] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.055 [2024-11-20 13:37:21.491495] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:22.055 BaseBdev1 00:17:22.055 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.055 13:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:22.055 13:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:22.055 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.055 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.055 BaseBdev2_malloc 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.316 true 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.316 [2024-11-20 13:37:21.558307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:22.316 [2024-11-20 13:37:21.558368] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.316 [2024-11-20 13:37:21.558387] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:22.316 [2024-11-20 13:37:21.558401] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.316 [2024-11-20 13:37:21.560713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.316 [2024-11-20 13:37:21.560755] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:22.316 BaseBdev2 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.316 BaseBdev3_malloc 00:17:22.316 13:37:21 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.316 true 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.316 [2024-11-20 13:37:21.638875] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:22.316 [2024-11-20 13:37:21.639070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.316 [2024-11-20 13:37:21.639165] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:22.316 [2024-11-20 13:37:21.639240] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.316 [2024-11-20 13:37:21.641758] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.316 [2024-11-20 13:37:21.641891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:22.316 BaseBdev3 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.316 BaseBdev4_malloc 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.316 true 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.316 [2024-11-20 13:37:21.714811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:17:22.316 [2024-11-20 13:37:21.714995] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.316 [2024-11-20 13:37:21.715069] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:22.316 [2024-11-20 13:37:21.715191] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.316 [2024-11-20 13:37:21.718046] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.316 [2024-11-20 13:37:21.718109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:22.316 BaseBdev4 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.316 [2024-11-20 13:37:21.723110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:22.316 [2024-11-20 13:37:21.726258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:22.316 [2024-11-20 13:37:21.726532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:22.316 [2024-11-20 13:37:21.726683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:22.316 [2024-11-20 13:37:21.727234] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:17:22.316 [2024-11-20 13:37:21.727323] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:22.316 [2024-11-20 13:37:21.727797] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:17:22.316 [2024-11-20 13:37:21.728182] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:17:22.316 [2024-11-20 13:37:21.728322] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:17:22.316 [2024-11-20 13:37:21.728673] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:22.316 13:37:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.316 13:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.316 "name": "raid_bdev1", 00:17:22.316 "uuid": "09ddc76d-51d9-463b-9c47-feebe08c5295", 00:17:22.316 "strip_size_kb": 0, 00:17:22.316 "state": "online", 00:17:22.316 "raid_level": "raid1", 00:17:22.316 "superblock": true, 00:17:22.317 "num_base_bdevs": 4, 00:17:22.317 "num_base_bdevs_discovered": 4, 00:17:22.317 "num_base_bdevs_operational": 4, 00:17:22.317 "base_bdevs_list": [ 00:17:22.317 { 
00:17:22.317 "name": "BaseBdev1", 00:17:22.317 "uuid": "ba7ea5c0-c526-5aa9-a08a-aa55d3ea7db7", 00:17:22.317 "is_configured": true, 00:17:22.317 "data_offset": 2048, 00:17:22.317 "data_size": 63488 00:17:22.317 }, 00:17:22.317 { 00:17:22.317 "name": "BaseBdev2", 00:17:22.317 "uuid": "e478256b-150d-569d-a427-94f63e571494", 00:17:22.317 "is_configured": true, 00:17:22.317 "data_offset": 2048, 00:17:22.317 "data_size": 63488 00:17:22.317 }, 00:17:22.317 { 00:17:22.317 "name": "BaseBdev3", 00:17:22.317 "uuid": "a3dade0a-d32a-5a9c-8f10-c62b7dc186c1", 00:17:22.317 "is_configured": true, 00:17:22.317 "data_offset": 2048, 00:17:22.317 "data_size": 63488 00:17:22.317 }, 00:17:22.317 { 00:17:22.317 "name": "BaseBdev4", 00:17:22.317 "uuid": "4ca5cdb0-cc83-5d3d-a871-0c4f4275e396", 00:17:22.317 "is_configured": true, 00:17:22.317 "data_offset": 2048, 00:17:22.317 "data_size": 63488 00:17:22.317 } 00:17:22.317 ] 00:17:22.317 }' 00:17:22.317 13:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.317 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.885 13:37:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:17:22.885 13:37:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:22.885 [2024-11-20 13:37:22.247783] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:17:23.823 13:37:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:17:23.823 13:37:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.823 13:37:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.823 13:37:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.823 13:37:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:17:23.823 13:37:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:17:23.823 13:37:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:17:23.823 13:37:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:17:23.823 13:37:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:23.823 13:37:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:23.823 13:37:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:23.823 13:37:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:23.823 13:37:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:23.823 13:37:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:23.823 13:37:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.823 13:37:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.823 13:37:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.823 13:37:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.823 13:37:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.823 13:37:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.823 13:37:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.823 13:37:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.823 13:37:23 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.823 13:37:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.823 "name": "raid_bdev1", 00:17:23.823 "uuid": "09ddc76d-51d9-463b-9c47-feebe08c5295", 00:17:23.823 "strip_size_kb": 0, 00:17:23.823 "state": "online", 00:17:23.823 "raid_level": "raid1", 00:17:23.823 "superblock": true, 00:17:23.823 "num_base_bdevs": 4, 00:17:23.823 "num_base_bdevs_discovered": 4, 00:17:23.823 "num_base_bdevs_operational": 4, 00:17:23.823 "base_bdevs_list": [ 00:17:23.823 { 00:17:23.823 "name": "BaseBdev1", 00:17:23.823 "uuid": "ba7ea5c0-c526-5aa9-a08a-aa55d3ea7db7", 00:17:23.823 "is_configured": true, 00:17:23.823 "data_offset": 2048, 00:17:23.823 "data_size": 63488 00:17:23.823 }, 00:17:23.823 { 00:17:23.823 "name": "BaseBdev2", 00:17:23.823 "uuid": "e478256b-150d-569d-a427-94f63e571494", 00:17:23.823 "is_configured": true, 00:17:23.823 "data_offset": 2048, 00:17:23.823 "data_size": 63488 00:17:23.823 }, 00:17:23.823 { 00:17:23.823 "name": "BaseBdev3", 00:17:23.823 "uuid": "a3dade0a-d32a-5a9c-8f10-c62b7dc186c1", 00:17:23.823 "is_configured": true, 00:17:23.823 "data_offset": 2048, 00:17:23.823 "data_size": 63488 00:17:23.823 }, 00:17:23.823 { 00:17:23.823 "name": "BaseBdev4", 00:17:23.823 "uuid": "4ca5cdb0-cc83-5d3d-a871-0c4f4275e396", 00:17:23.823 "is_configured": true, 00:17:23.823 "data_offset": 2048, 00:17:23.823 "data_size": 63488 00:17:23.823 } 00:17:23.823 ] 00:17:23.823 }' 00:17:23.823 13:37:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.823 13:37:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.392 13:37:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:24.392 13:37:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.392 13:37:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:24.392 [2024-11-20 13:37:23.579946] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:24.392 [2024-11-20 13:37:23.579985] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:24.392 [2024-11-20 13:37:23.582964] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:24.392 [2024-11-20 13:37:23.583161] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:24.392 [2024-11-20 13:37:23.583331] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:24.392 [2024-11-20 13:37:23.583525] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:17:24.392 { 00:17:24.392 "results": [ 00:17:24.392 { 00:17:24.392 "job": "raid_bdev1", 00:17:24.392 "core_mask": "0x1", 00:17:24.392 "workload": "randrw", 00:17:24.392 "percentage": 50, 00:17:24.392 "status": "finished", 00:17:24.392 "queue_depth": 1, 00:17:24.392 "io_size": 131072, 00:17:24.392 "runtime": 1.332137, 00:17:24.392 "iops": 10893.774439115496, 00:17:24.392 "mibps": 1361.721804889437, 00:17:24.392 "io_failed": 0, 00:17:24.392 "io_timeout": 0, 00:17:24.392 "avg_latency_us": 89.0740912935092, 00:17:24.392 "min_latency_us": 24.571887550200803, 00:17:24.392 "max_latency_us": 1500.2216867469879 00:17:24.392 } 00:17:24.392 ], 00:17:24.392 "core_count": 1 00:17:24.392 } 00:17:24.392 13:37:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.392 13:37:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74747 00:17:24.392 13:37:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 74747 ']' 00:17:24.392 13:37:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 74747 00:17:24.392 13:37:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:17:24.392 13:37:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:24.392 13:37:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74747 00:17:24.392 13:37:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:24.392 killing process with pid 74747 00:17:24.392 13:37:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:24.392 13:37:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74747' 00:17:24.392 13:37:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 74747 00:17:24.392 [2024-11-20 13:37:23.637470] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:24.392 13:37:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 74747 00:17:24.651 [2024-11-20 13:37:23.966596] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:26.028 13:37:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:17:26.028 13:37:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:17:26.028 13:37:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.D7zskOPKUc 00:17:26.028 13:37:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:17:26.028 13:37:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:17:26.028 13:37:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:26.028 13:37:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:26.028 13:37:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:17:26.028 00:17:26.028 real 0m4.696s 00:17:26.028 user 0m5.462s 00:17:26.028 sys 0m0.638s 
00:17:26.028 13:37:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:26.028 13:37:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.028 ************************************ 00:17:26.028 END TEST raid_read_error_test 00:17:26.028 ************************************ 00:17:26.028 13:37:25 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:17:26.028 13:37:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:26.028 13:37:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:26.028 13:37:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:26.028 ************************************ 00:17:26.028 START TEST raid_write_error_test 00:17:26.028 ************************************ 00:17:26.028 13:37:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:17:26.028 13:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:17:26.028 13:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:17:26.028 13:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:17:26.028 13:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:17:26.028 13:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:26.028 13:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:17:26.028 13:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:26.028 13:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:26.028 13:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:17:26.028 13:37:25 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:26.028 13:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:26.028 13:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:17:26.028 13:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:26.028 13:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:26.028 13:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:17:26.028 13:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:26.028 13:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:26.028 13:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:26.028 13:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:17:26.028 13:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:17:26.028 13:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:17:26.028 13:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:17:26.028 13:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:17:26.028 13:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:17:26.028 13:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:17:26.028 13:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:17:26.028 13:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:17:26.028 13:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.gXjc81Gkzj 00:17:26.028 13:37:25 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74893 00:17:26.029 13:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:26.029 13:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74893 00:17:26.029 13:37:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 74893 ']' 00:17:26.029 13:37:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:26.029 13:37:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:26.029 13:37:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.029 13:37:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:26.029 13:37:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.029 [2024-11-20 13:37:25.354395] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:17:26.029 [2024-11-20 13:37:25.354722] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74893 ] 00:17:26.286 [2024-11-20 13:37:25.533705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.287 [2024-11-20 13:37:25.652130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.545 [2024-11-20 13:37:25.860013] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:26.545 [2024-11-20 13:37:25.860084] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:26.804 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:26.804 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:17:26.804 13:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:26.805 13:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:26.805 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.805 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.805 BaseBdev1_malloc 00:17:26.805 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.805 13:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:17:26.805 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.805 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.805 true 00:17:26.805 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:26.805 13:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:26.805 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.805 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.805 [2024-11-20 13:37:26.249876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:26.805 [2024-11-20 13:37:26.250054] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:26.805 [2024-11-20 13:37:26.250123] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:17:26.805 [2024-11-20 13:37:26.250140] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:26.805 [2024-11-20 13:37:26.252499] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:26.805 [2024-11-20 13:37:26.252542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:26.805 BaseBdev1 00:17:26.805 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.805 13:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:26.805 13:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:26.805 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.805 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.064 BaseBdev2_malloc 00:17:27.064 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.064 13:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:17:27.064 13:37:26 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.064 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.064 true 00:17:27.064 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.064 13:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:27.064 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.064 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.064 [2024-11-20 13:37:26.321315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:27.064 [2024-11-20 13:37:26.321481] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:27.064 [2024-11-20 13:37:26.321532] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:27.064 [2024-11-20 13:37:26.321673] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:27.064 [2024-11-20 13:37:26.324006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:27.064 [2024-11-20 13:37:26.324160] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:27.064 BaseBdev2 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:17:27.065 BaseBdev3_malloc 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.065 true 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.065 [2024-11-20 13:37:26.398161] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:27.065 [2024-11-20 13:37:26.398338] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:27.065 [2024-11-20 13:37:26.398393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:27.065 [2024-11-20 13:37:26.398474] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:27.065 [2024-11-20 13:37:26.400868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:27.065 [2024-11-20 13:37:26.401010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:27.065 BaseBdev3 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.065 BaseBdev4_malloc 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.065 true 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.065 [2024-11-20 13:37:26.457458] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:17:27.065 [2024-11-20 13:37:26.457623] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:27.065 [2024-11-20 13:37:26.457677] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:27.065 [2024-11-20 13:37:26.457754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:27.065 [2024-11-20 13:37:26.460551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:27.065 [2024-11-20 13:37:26.460701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:27.065 BaseBdev4 
00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.065 [2024-11-20 13:37:26.465564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:27.065 [2024-11-20 13:37:26.467830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:27.065 [2024-11-20 13:37:26.468050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:27.065 [2024-11-20 13:37:26.468194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:27.065 [2024-11-20 13:37:26.468558] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:17:27.065 [2024-11-20 13:37:26.468581] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:27.065 [2024-11-20 13:37:26.468851] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:17:27.065 [2024-11-20 13:37:26.469015] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:17:27.065 [2024-11-20 13:37:26.469026] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:17:27.065 [2024-11-20 13:37:26.469202] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.065 "name": "raid_bdev1", 00:17:27.065 "uuid": "702de323-0e37-4e01-b88a-c7874caff4d1", 00:17:27.065 "strip_size_kb": 0, 00:17:27.065 "state": "online", 00:17:27.065 "raid_level": "raid1", 00:17:27.065 "superblock": true, 00:17:27.065 "num_base_bdevs": 4, 00:17:27.065 "num_base_bdevs_discovered": 4, 00:17:27.065 
"num_base_bdevs_operational": 4, 00:17:27.065 "base_bdevs_list": [ 00:17:27.065 { 00:17:27.065 "name": "BaseBdev1", 00:17:27.065 "uuid": "d60a7f7b-c22d-5fbb-8989-e568472abd07", 00:17:27.065 "is_configured": true, 00:17:27.065 "data_offset": 2048, 00:17:27.065 "data_size": 63488 00:17:27.065 }, 00:17:27.065 { 00:17:27.065 "name": "BaseBdev2", 00:17:27.065 "uuid": "e08c217d-41b9-586c-a569-478df94df3c8", 00:17:27.065 "is_configured": true, 00:17:27.065 "data_offset": 2048, 00:17:27.065 "data_size": 63488 00:17:27.065 }, 00:17:27.065 { 00:17:27.065 "name": "BaseBdev3", 00:17:27.065 "uuid": "e718a79c-5529-57d8-a7bd-9d06c298362f", 00:17:27.065 "is_configured": true, 00:17:27.065 "data_offset": 2048, 00:17:27.065 "data_size": 63488 00:17:27.065 }, 00:17:27.065 { 00:17:27.065 "name": "BaseBdev4", 00:17:27.065 "uuid": "c7824f77-08b2-504f-9527-8670606d642c", 00:17:27.065 "is_configured": true, 00:17:27.065 "data_offset": 2048, 00:17:27.065 "data_size": 63488 00:17:27.065 } 00:17:27.065 ] 00:17:27.065 }' 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.065 13:37:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.637 13:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:27.637 13:37:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:17:27.637 [2024-11-20 13:37:26.982438] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:17:28.575 13:37:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:17:28.575 13:37:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.575 13:37:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.575 [2024-11-20 13:37:27.871744] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:17:28.575 [2024-11-20 13:37:27.871808] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:28.575 [2024-11-20 13:37:27.872034] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:17:28.575 13:37:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.575 13:37:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:17:28.575 13:37:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:17:28.575 13:37:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:17:28.575 13:37:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:17:28.575 13:37:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:28.575 13:37:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:28.575 13:37:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:28.575 13:37:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:28.575 13:37:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:28.575 13:37:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:28.575 13:37:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.575 13:37:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.575 13:37:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.575 13:37:27 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.575 13:37:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.575 13:37:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.575 13:37:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.575 13:37:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.575 13:37:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.575 13:37:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.575 "name": "raid_bdev1", 00:17:28.575 "uuid": "702de323-0e37-4e01-b88a-c7874caff4d1", 00:17:28.575 "strip_size_kb": 0, 00:17:28.575 "state": "online", 00:17:28.575 "raid_level": "raid1", 00:17:28.575 "superblock": true, 00:17:28.575 "num_base_bdevs": 4, 00:17:28.575 "num_base_bdevs_discovered": 3, 00:17:28.575 "num_base_bdevs_operational": 3, 00:17:28.575 "base_bdevs_list": [ 00:17:28.575 { 00:17:28.575 "name": null, 00:17:28.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.575 "is_configured": false, 00:17:28.575 "data_offset": 0, 00:17:28.575 "data_size": 63488 00:17:28.575 }, 00:17:28.575 { 00:17:28.575 "name": "BaseBdev2", 00:17:28.575 "uuid": "e08c217d-41b9-586c-a569-478df94df3c8", 00:17:28.575 "is_configured": true, 00:17:28.575 "data_offset": 2048, 00:17:28.575 "data_size": 63488 00:17:28.575 }, 00:17:28.575 { 00:17:28.575 "name": "BaseBdev3", 00:17:28.575 "uuid": "e718a79c-5529-57d8-a7bd-9d06c298362f", 00:17:28.575 "is_configured": true, 00:17:28.575 "data_offset": 2048, 00:17:28.575 "data_size": 63488 00:17:28.575 }, 00:17:28.575 { 00:17:28.575 "name": "BaseBdev4", 00:17:28.575 "uuid": "c7824f77-08b2-504f-9527-8670606d642c", 00:17:28.575 "is_configured": true, 00:17:28.575 "data_offset": 2048, 00:17:28.575 "data_size": 63488 00:17:28.575 } 00:17:28.575 ] 
00:17:28.575 }' 00:17:28.575 13:37:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.575 13:37:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.835 13:37:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:28.836 13:37:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.836 13:37:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.836 [2024-11-20 13:37:28.238927] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:28.836 [2024-11-20 13:37:28.238965] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:28.836 [2024-11-20 13:37:28.241680] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:28.836 [2024-11-20 13:37:28.241731] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:28.836 [2024-11-20 13:37:28.241834] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:28.836 [2024-11-20 13:37:28.241848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:17:28.836 { 00:17:28.836 "results": [ 00:17:28.836 { 00:17:28.836 "job": "raid_bdev1", 00:17:28.836 "core_mask": "0x1", 00:17:28.836 "workload": "randrw", 00:17:28.836 "percentage": 50, 00:17:28.836 "status": "finished", 00:17:28.836 "queue_depth": 1, 00:17:28.836 "io_size": 131072, 00:17:28.836 "runtime": 1.256416, 00:17:28.836 "iops": 12140.087359600642, 00:17:28.836 "mibps": 1517.5109199500803, 00:17:28.836 "io_failed": 0, 00:17:28.836 "io_timeout": 0, 00:17:28.836 "avg_latency_us": 79.65915086294171, 00:17:28.836 "min_latency_us": 24.160642570281123, 00:17:28.836 "max_latency_us": 1447.5823293172691 00:17:28.836 } 00:17:28.836 ], 00:17:28.836 "core_count": 1 
00:17:28.836 } 00:17:28.836 13:37:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.836 13:37:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74893 00:17:28.836 13:37:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 74893 ']' 00:17:28.836 13:37:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 74893 00:17:28.836 13:37:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:17:28.836 13:37:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:28.836 13:37:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74893 00:17:28.836 killing process with pid 74893 00:17:28.836 13:37:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:28.836 13:37:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:28.836 13:37:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74893' 00:17:28.836 13:37:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 74893 00:17:28.836 [2024-11-20 13:37:28.282524] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:28.836 13:37:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 74893 00:17:29.403 [2024-11-20 13:37:28.613156] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:30.336 13:37:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:17:30.336 13:37:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.gXjc81Gkzj 00:17:30.336 13:37:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:17:30.336 ************************************ 00:17:30.336 END TEST 
raid_write_error_test 00:17:30.336 ************************************ 00:17:30.336 13:37:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:17:30.336 13:37:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:17:30.336 13:37:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:30.336 13:37:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:30.336 13:37:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:17:30.336 00:17:30.336 real 0m4.572s 00:17:30.336 user 0m5.293s 00:17:30.336 sys 0m0.597s 00:17:30.336 13:37:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:30.336 13:37:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.594 13:37:29 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:17:30.594 13:37:29 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:17:30.594 13:37:29 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:17:30.594 13:37:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:30.594 13:37:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:30.594 13:37:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:30.594 ************************************ 00:17:30.594 START TEST raid_rebuild_test 00:17:30.594 ************************************ 00:17:30.594 13:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:17:30.594 13:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:30.594 13:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:30.594 13:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 
00:17:30.594 13:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:30.594 13:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:30.594 13:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:30.594 13:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:30.594 13:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:30.594 13:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:30.594 13:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:30.594 13:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:30.594 13:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:30.594 13:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:30.594 13:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:30.594 13:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:30.594 13:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:30.594 13:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:30.594 13:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:30.594 13:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:30.594 13:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:30.594 13:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:30.594 13:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:30.595 13:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true 
']' 00:17:30.595 13:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75031 00:17:30.595 13:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75031 00:17:30.595 13:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:30.595 13:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75031 ']' 00:17:30.595 13:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.595 13:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:30.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.595 13:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.595 13:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:30.595 13:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.595 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:30.595 Zero copy mechanism will not be used. 00:17:30.595 [2024-11-20 13:37:29.993467] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:17:30.595 [2024-11-20 13:37:29.993592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75031 ] 00:17:30.853 [2024-11-20 13:37:30.178890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.853 [2024-11-20 13:37:30.299263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.112 [2024-11-20 13:37:30.512321] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:31.112 [2024-11-20 13:37:30.512379] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:31.371 13:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:31.371 13:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:17:31.371 13:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:31.371 13:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:31.371 13:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.371 13:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.631 BaseBdev1_malloc 00:17:31.631 13:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.631 13:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:31.631 13:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.631 13:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.631 [2024-11-20 13:37:30.872981] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:31.631 
[2024-11-20 13:37:30.873070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.631 [2024-11-20 13:37:30.873097] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:31.631 [2024-11-20 13:37:30.873115] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.631 [2024-11-20 13:37:30.875513] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.631 [2024-11-20 13:37:30.875572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:31.631 BaseBdev1 00:17:31.631 13:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.631 13:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:31.631 13:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:31.631 13:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.631 13:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.631 BaseBdev2_malloc 00:17:31.631 13:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.631 13:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:31.631 13:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.631 13:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.631 [2024-11-20 13:37:30.926489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:31.631 [2024-11-20 13:37:30.926571] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.631 [2024-11-20 13:37:30.926601] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:17:31.631 [2024-11-20 13:37:30.926619] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.631 [2024-11-20 13:37:30.929010] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.631 [2024-11-20 13:37:30.929071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:31.631 BaseBdev2 00:17:31.631 13:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.631 13:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:31.631 13:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.631 13:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.631 spare_malloc 00:17:31.631 13:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.631 13:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:31.631 13:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.631 13:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.631 spare_delay 00:17:31.631 13:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.631 13:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:31.631 13:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.631 13:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.631 [2024-11-20 13:37:31.008585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:31.631 [2024-11-20 13:37:31.008656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:17:31.631 [2024-11-20 13:37:31.008681] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:31.631 [2024-11-20 13:37:31.008698] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.631 [2024-11-20 13:37:31.011089] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.631 [2024-11-20 13:37:31.011138] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:31.631 spare 00:17:31.631 13:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.631 13:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:31.631 13:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.631 13:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.631 [2024-11-20 13:37:31.020626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:31.631 [2024-11-20 13:37:31.022686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:31.631 [2024-11-20 13:37:31.022805] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:31.631 [2024-11-20 13:37:31.022827] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:31.631 [2024-11-20 13:37:31.023112] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:31.631 [2024-11-20 13:37:31.023283] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:31.631 [2024-11-20 13:37:31.023298] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:31.631 [2024-11-20 13:37:31.023468] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:17:31.631 13:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.631 13:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:31.631 13:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:31.631 13:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:31.631 13:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:31.631 13:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:31.631 13:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:31.631 13:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.631 13:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.631 13:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.631 13:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.631 13:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.631 13:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.631 13:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.631 13:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.631 13:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.632 13:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.632 "name": "raid_bdev1", 00:17:31.632 "uuid": "cc7e157e-ce2a-4bfc-8e19-da85dec2920e", 00:17:31.632 "strip_size_kb": 0, 00:17:31.632 "state": "online", 00:17:31.632 
"raid_level": "raid1", 00:17:31.632 "superblock": false, 00:17:31.632 "num_base_bdevs": 2, 00:17:31.632 "num_base_bdevs_discovered": 2, 00:17:31.632 "num_base_bdevs_operational": 2, 00:17:31.632 "base_bdevs_list": [ 00:17:31.632 { 00:17:31.632 "name": "BaseBdev1", 00:17:31.632 "uuid": "d4d3d4a7-28c0-503c-af76-0ee11296600e", 00:17:31.632 "is_configured": true, 00:17:31.632 "data_offset": 0, 00:17:31.632 "data_size": 65536 00:17:31.632 }, 00:17:31.632 { 00:17:31.632 "name": "BaseBdev2", 00:17:31.632 "uuid": "bb4d0152-b4cf-57b3-ac1f-1d4f1bcf7509", 00:17:31.632 "is_configured": true, 00:17:31.632 "data_offset": 0, 00:17:31.632 "data_size": 65536 00:17:31.632 } 00:17:31.632 ] 00:17:31.632 }' 00:17:31.632 13:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.632 13:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.206 13:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:32.206 13:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:32.206 13:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.206 13:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.206 [2024-11-20 13:37:31.440628] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:32.206 13:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.206 13:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:17:32.206 13:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.206 13:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.206 13:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.206 13:37:31 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:32.206 13:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.206 13:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:32.206 13:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:32.206 13:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:32.206 13:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:32.206 13:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:32.206 13:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:32.206 13:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:32.206 13:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:32.206 13:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:32.206 13:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:32.206 13:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:32.206 13:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:32.206 13:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:32.206 13:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:32.465 [2024-11-20 13:37:31.720305] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:32.465 /dev/nbd0 00:17:32.465 13:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:32.465 13:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:17:32.465 13:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:32.465 13:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:32.465 13:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:32.465 13:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:32.465 13:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:32.465 13:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:32.465 13:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:32.465 13:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:32.465 13:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:32.465 1+0 records in 00:17:32.465 1+0 records out 00:17:32.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000453109 s, 9.0 MB/s 00:17:32.465 13:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:32.465 13:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:32.465 13:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:32.465 13:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:32.465 13:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:32.465 13:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:32.465 13:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:32.465 13:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 
-- # '[' raid1 = raid5f ']' 00:17:32.465 13:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:32.465 13:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:17:37.759 65536+0 records in 00:17:37.759 65536+0 records out 00:17:37.759 33554432 bytes (34 MB, 32 MiB) copied, 5.21995 s, 6.4 MB/s 00:17:37.759 13:37:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:37.759 13:37:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:37.759 13:37:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:37.759 13:37:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:37.759 13:37:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:37.759 13:37:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:37.759 13:37:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:37.759 [2024-11-20 13:37:37.225763] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:38.019 13:37:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:38.019 13:37:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:38.019 13:37:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:38.019 13:37:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:38.019 13:37:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:38.019 13:37:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:38.019 13:37:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # 
break 00:17:38.019 13:37:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:38.019 13:37:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:38.019 13:37:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.019 13:37:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.019 [2024-11-20 13:37:37.265817] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:38.019 13:37:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.019 13:37:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:38.019 13:37:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:38.019 13:37:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.019 13:37:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:38.019 13:37:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:38.019 13:37:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:38.019 13:37:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.019 13:37:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.019 13:37:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.019 13:37:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.019 13:37:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.019 13:37:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.019 13:37:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:17:38.019 13:37:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.019 13:37:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.019 13:37:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.019 "name": "raid_bdev1", 00:17:38.019 "uuid": "cc7e157e-ce2a-4bfc-8e19-da85dec2920e", 00:17:38.019 "strip_size_kb": 0, 00:17:38.019 "state": "online", 00:17:38.019 "raid_level": "raid1", 00:17:38.019 "superblock": false, 00:17:38.019 "num_base_bdevs": 2, 00:17:38.019 "num_base_bdevs_discovered": 1, 00:17:38.019 "num_base_bdevs_operational": 1, 00:17:38.019 "base_bdevs_list": [ 00:17:38.019 { 00:17:38.019 "name": null, 00:17:38.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.019 "is_configured": false, 00:17:38.019 "data_offset": 0, 00:17:38.019 "data_size": 65536 00:17:38.019 }, 00:17:38.019 { 00:17:38.019 "name": "BaseBdev2", 00:17:38.019 "uuid": "bb4d0152-b4cf-57b3-ac1f-1d4f1bcf7509", 00:17:38.019 "is_configured": true, 00:17:38.019 "data_offset": 0, 00:17:38.019 "data_size": 65536 00:17:38.019 } 00:17:38.019 ] 00:17:38.019 }' 00:17:38.019 13:37:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.019 13:37:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.278 13:37:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:38.279 13:37:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.279 13:37:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.279 [2024-11-20 13:37:37.709267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:38.279 [2024-11-20 13:37:37.728597] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:17:38.279 13:37:37 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.279 13:37:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:38.279 [2024-11-20 13:37:37.731179] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:39.659 13:37:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:39.659 13:37:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.659 13:37:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:39.659 13:37:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:39.659 13:37:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.659 13:37:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.659 13:37:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.659 13:37:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.659 13:37:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.659 13:37:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.659 13:37:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.659 "name": "raid_bdev1", 00:17:39.659 "uuid": "cc7e157e-ce2a-4bfc-8e19-da85dec2920e", 00:17:39.659 "strip_size_kb": 0, 00:17:39.659 "state": "online", 00:17:39.659 "raid_level": "raid1", 00:17:39.659 "superblock": false, 00:17:39.659 "num_base_bdevs": 2, 00:17:39.659 "num_base_bdevs_discovered": 2, 00:17:39.659 "num_base_bdevs_operational": 2, 00:17:39.659 "process": { 00:17:39.659 "type": "rebuild", 00:17:39.659 "target": "spare", 00:17:39.659 "progress": { 00:17:39.659 "blocks": 20480, 
00:17:39.659 "percent": 31 00:17:39.659 } 00:17:39.659 }, 00:17:39.659 "base_bdevs_list": [ 00:17:39.659 { 00:17:39.659 "name": "spare", 00:17:39.659 "uuid": "a7a2136b-d26c-5ed5-8b80-20f978270b24", 00:17:39.659 "is_configured": true, 00:17:39.659 "data_offset": 0, 00:17:39.659 "data_size": 65536 00:17:39.659 }, 00:17:39.659 { 00:17:39.659 "name": "BaseBdev2", 00:17:39.659 "uuid": "bb4d0152-b4cf-57b3-ac1f-1d4f1bcf7509", 00:17:39.659 "is_configured": true, 00:17:39.659 "data_offset": 0, 00:17:39.659 "data_size": 65536 00:17:39.659 } 00:17:39.659 ] 00:17:39.659 }' 00:17:39.659 13:37:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.659 13:37:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:39.659 13:37:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.659 13:37:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:39.659 13:37:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:39.659 13:37:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.659 13:37:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.659 [2024-11-20 13:37:38.898675] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:39.659 [2024-11-20 13:37:38.937652] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:39.659 [2024-11-20 13:37:38.937756] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:39.659 [2024-11-20 13:37:38.937777] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:39.659 [2024-11-20 13:37:38.937794] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:39.659 13:37:38 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.659 13:37:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:39.659 13:37:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:39.659 13:37:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:39.659 13:37:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:39.659 13:37:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:39.659 13:37:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:39.659 13:37:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.659 13:37:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.659 13:37:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.659 13:37:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.659 13:37:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.659 13:37:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.659 13:37:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.659 13:37:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.659 13:37:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.659 13:37:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.659 "name": "raid_bdev1", 00:17:39.659 "uuid": "cc7e157e-ce2a-4bfc-8e19-da85dec2920e", 00:17:39.659 "strip_size_kb": 0, 00:17:39.659 "state": "online", 00:17:39.659 "raid_level": "raid1", 00:17:39.659 
"superblock": false, 00:17:39.659 "num_base_bdevs": 2, 00:17:39.659 "num_base_bdevs_discovered": 1, 00:17:39.659 "num_base_bdevs_operational": 1, 00:17:39.659 "base_bdevs_list": [ 00:17:39.659 { 00:17:39.659 "name": null, 00:17:39.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.659 "is_configured": false, 00:17:39.659 "data_offset": 0, 00:17:39.659 "data_size": 65536 00:17:39.659 }, 00:17:39.659 { 00:17:39.659 "name": "BaseBdev2", 00:17:39.659 "uuid": "bb4d0152-b4cf-57b3-ac1f-1d4f1bcf7509", 00:17:39.659 "is_configured": true, 00:17:39.659 "data_offset": 0, 00:17:39.659 "data_size": 65536 00:17:39.659 } 00:17:39.659 ] 00:17:39.659 }' 00:17:39.659 13:37:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.659 13:37:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.226 13:37:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:40.226 13:37:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:40.226 13:37:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:40.226 13:37:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:40.226 13:37:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:40.227 13:37:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.227 13:37:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.227 13:37:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.227 13:37:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.227 13:37:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.227 13:37:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:17:40.227 "name": "raid_bdev1", 00:17:40.227 "uuid": "cc7e157e-ce2a-4bfc-8e19-da85dec2920e", 00:17:40.227 "strip_size_kb": 0, 00:17:40.227 "state": "online", 00:17:40.227 "raid_level": "raid1", 00:17:40.227 "superblock": false, 00:17:40.227 "num_base_bdevs": 2, 00:17:40.227 "num_base_bdevs_discovered": 1, 00:17:40.227 "num_base_bdevs_operational": 1, 00:17:40.227 "base_bdevs_list": [ 00:17:40.227 { 00:17:40.227 "name": null, 00:17:40.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.227 "is_configured": false, 00:17:40.227 "data_offset": 0, 00:17:40.227 "data_size": 65536 00:17:40.227 }, 00:17:40.227 { 00:17:40.227 "name": "BaseBdev2", 00:17:40.227 "uuid": "bb4d0152-b4cf-57b3-ac1f-1d4f1bcf7509", 00:17:40.227 "is_configured": true, 00:17:40.227 "data_offset": 0, 00:17:40.227 "data_size": 65536 00:17:40.227 } 00:17:40.227 ] 00:17:40.227 }' 00:17:40.227 13:37:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:40.227 13:37:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:40.227 13:37:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:40.227 13:37:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:40.227 13:37:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:40.227 13:37:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.227 13:37:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.227 [2024-11-20 13:37:39.608764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:40.227 [2024-11-20 13:37:39.626661] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:17:40.227 13:37:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.227 
13:37:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:40.227 [2024-11-20 13:37:39.628982] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:41.160 13:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:41.160 13:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.160 13:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:41.160 13:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:41.160 13:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.160 13:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.160 13:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.160 13:37:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.160 13:37:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.475 13:37:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.475 13:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.475 "name": "raid_bdev1", 00:17:41.476 "uuid": "cc7e157e-ce2a-4bfc-8e19-da85dec2920e", 00:17:41.476 "strip_size_kb": 0, 00:17:41.476 "state": "online", 00:17:41.476 "raid_level": "raid1", 00:17:41.476 "superblock": false, 00:17:41.476 "num_base_bdevs": 2, 00:17:41.476 "num_base_bdevs_discovered": 2, 00:17:41.476 "num_base_bdevs_operational": 2, 00:17:41.476 "process": { 00:17:41.476 "type": "rebuild", 00:17:41.476 "target": "spare", 00:17:41.476 "progress": { 00:17:41.476 "blocks": 20480, 00:17:41.476 "percent": 31 00:17:41.476 } 00:17:41.476 }, 00:17:41.476 "base_bdevs_list": [ 
00:17:41.476 { 00:17:41.476 "name": "spare", 00:17:41.476 "uuid": "a7a2136b-d26c-5ed5-8b80-20f978270b24", 00:17:41.476 "is_configured": true, 00:17:41.476 "data_offset": 0, 00:17:41.476 "data_size": 65536 00:17:41.476 }, 00:17:41.476 { 00:17:41.476 "name": "BaseBdev2", 00:17:41.476 "uuid": "bb4d0152-b4cf-57b3-ac1f-1d4f1bcf7509", 00:17:41.476 "is_configured": true, 00:17:41.476 "data_offset": 0, 00:17:41.476 "data_size": 65536 00:17:41.476 } 00:17:41.476 ] 00:17:41.476 }' 00:17:41.476 13:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.476 13:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:41.476 13:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.476 13:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:41.476 13:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:41.476 13:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:41.476 13:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:41.476 13:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:41.476 13:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=368 00:17:41.476 13:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:41.476 13:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:41.476 13:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.476 13:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:41.476 13:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:41.476 
13:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.476 13:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.476 13:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.476 13:37:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.476 13:37:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.476 13:37:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.476 13:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.476 "name": "raid_bdev1", 00:17:41.476 "uuid": "cc7e157e-ce2a-4bfc-8e19-da85dec2920e", 00:17:41.476 "strip_size_kb": 0, 00:17:41.476 "state": "online", 00:17:41.476 "raid_level": "raid1", 00:17:41.476 "superblock": false, 00:17:41.476 "num_base_bdevs": 2, 00:17:41.476 "num_base_bdevs_discovered": 2, 00:17:41.476 "num_base_bdevs_operational": 2, 00:17:41.476 "process": { 00:17:41.476 "type": "rebuild", 00:17:41.476 "target": "spare", 00:17:41.476 "progress": { 00:17:41.476 "blocks": 22528, 00:17:41.476 "percent": 34 00:17:41.476 } 00:17:41.476 }, 00:17:41.476 "base_bdevs_list": [ 00:17:41.476 { 00:17:41.476 "name": "spare", 00:17:41.476 "uuid": "a7a2136b-d26c-5ed5-8b80-20f978270b24", 00:17:41.476 "is_configured": true, 00:17:41.476 "data_offset": 0, 00:17:41.476 "data_size": 65536 00:17:41.476 }, 00:17:41.476 { 00:17:41.476 "name": "BaseBdev2", 00:17:41.476 "uuid": "bb4d0152-b4cf-57b3-ac1f-1d4f1bcf7509", 00:17:41.476 "is_configured": true, 00:17:41.476 "data_offset": 0, 00:17:41.476 "data_size": 65536 00:17:41.476 } 00:17:41.476 ] 00:17:41.476 }' 00:17:41.476 13:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.476 13:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:17:41.476 13:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.476 13:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:41.476 13:37:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:42.853 13:37:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:42.853 13:37:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:42.853 13:37:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.853 13:37:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:42.853 13:37:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:42.853 13:37:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.853 13:37:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.853 13:37:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.853 13:37:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.853 13:37:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.853 13:37:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.853 13:37:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.853 "name": "raid_bdev1", 00:17:42.853 "uuid": "cc7e157e-ce2a-4bfc-8e19-da85dec2920e", 00:17:42.853 "strip_size_kb": 0, 00:17:42.853 "state": "online", 00:17:42.853 "raid_level": "raid1", 00:17:42.853 "superblock": false, 00:17:42.853 "num_base_bdevs": 2, 00:17:42.853 "num_base_bdevs_discovered": 2, 00:17:42.853 "num_base_bdevs_operational": 2, 00:17:42.853 "process": { 
00:17:42.853 "type": "rebuild", 00:17:42.853 "target": "spare", 00:17:42.853 "progress": { 00:17:42.853 "blocks": 45056, 00:17:42.853 "percent": 68 00:17:42.853 } 00:17:42.853 }, 00:17:42.853 "base_bdevs_list": [ 00:17:42.853 { 00:17:42.853 "name": "spare", 00:17:42.853 "uuid": "a7a2136b-d26c-5ed5-8b80-20f978270b24", 00:17:42.853 "is_configured": true, 00:17:42.853 "data_offset": 0, 00:17:42.853 "data_size": 65536 00:17:42.853 }, 00:17:42.853 { 00:17:42.853 "name": "BaseBdev2", 00:17:42.853 "uuid": "bb4d0152-b4cf-57b3-ac1f-1d4f1bcf7509", 00:17:42.853 "is_configured": true, 00:17:42.853 "data_offset": 0, 00:17:42.853 "data_size": 65536 00:17:42.853 } 00:17:42.853 ] 00:17:42.853 }' 00:17:42.853 13:37:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.853 13:37:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:42.853 13:37:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.853 13:37:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:42.853 13:37:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:43.421 [2024-11-20 13:37:42.844697] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:43.421 [2024-11-20 13:37:42.844796] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:43.421 [2024-11-20 13:37:42.844852] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:43.681 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:43.681 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:43.681 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.681 13:37:43 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:43.681 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:43.681 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.681 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.681 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.681 13:37:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.681 13:37:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.681 13:37:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.681 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.681 "name": "raid_bdev1", 00:17:43.681 "uuid": "cc7e157e-ce2a-4bfc-8e19-da85dec2920e", 00:17:43.681 "strip_size_kb": 0, 00:17:43.681 "state": "online", 00:17:43.681 "raid_level": "raid1", 00:17:43.681 "superblock": false, 00:17:43.681 "num_base_bdevs": 2, 00:17:43.681 "num_base_bdevs_discovered": 2, 00:17:43.681 "num_base_bdevs_operational": 2, 00:17:43.681 "base_bdevs_list": [ 00:17:43.681 { 00:17:43.681 "name": "spare", 00:17:43.681 "uuid": "a7a2136b-d26c-5ed5-8b80-20f978270b24", 00:17:43.681 "is_configured": true, 00:17:43.681 "data_offset": 0, 00:17:43.681 "data_size": 65536 00:17:43.681 }, 00:17:43.681 { 00:17:43.681 "name": "BaseBdev2", 00:17:43.681 "uuid": "bb4d0152-b4cf-57b3-ac1f-1d4f1bcf7509", 00:17:43.681 "is_configured": true, 00:17:43.681 "data_offset": 0, 00:17:43.681 "data_size": 65536 00:17:43.681 } 00:17:43.681 ] 00:17:43.681 }' 00:17:43.681 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.681 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:43.681 13:37:43 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.940 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:43.940 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:43.940 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:43.940 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.940 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:43.940 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:43.940 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.940 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.940 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.940 13:37:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.940 13:37:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.940 13:37:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.940 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.940 "name": "raid_bdev1", 00:17:43.940 "uuid": "cc7e157e-ce2a-4bfc-8e19-da85dec2920e", 00:17:43.940 "strip_size_kb": 0, 00:17:43.940 "state": "online", 00:17:43.940 "raid_level": "raid1", 00:17:43.940 "superblock": false, 00:17:43.940 "num_base_bdevs": 2, 00:17:43.940 "num_base_bdevs_discovered": 2, 00:17:43.940 "num_base_bdevs_operational": 2, 00:17:43.940 "base_bdevs_list": [ 00:17:43.940 { 00:17:43.940 "name": "spare", 00:17:43.940 "uuid": "a7a2136b-d26c-5ed5-8b80-20f978270b24", 00:17:43.940 "is_configured": true, 
00:17:43.940 "data_offset": 0, 00:17:43.940 "data_size": 65536 00:17:43.940 }, 00:17:43.940 { 00:17:43.940 "name": "BaseBdev2", 00:17:43.940 "uuid": "bb4d0152-b4cf-57b3-ac1f-1d4f1bcf7509", 00:17:43.940 "is_configured": true, 00:17:43.940 "data_offset": 0, 00:17:43.940 "data_size": 65536 00:17:43.940 } 00:17:43.940 ] 00:17:43.940 }' 00:17:43.940 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.940 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:43.940 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.940 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:43.940 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:43.940 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:43.940 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:43.940 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:43.940 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:43.940 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:43.940 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.940 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.940 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.940 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.940 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.940 13:37:43 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.940 13:37:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.940 13:37:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.940 13:37:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.940 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.940 "name": "raid_bdev1", 00:17:43.940 "uuid": "cc7e157e-ce2a-4bfc-8e19-da85dec2920e", 00:17:43.940 "strip_size_kb": 0, 00:17:43.940 "state": "online", 00:17:43.940 "raid_level": "raid1", 00:17:43.940 "superblock": false, 00:17:43.940 "num_base_bdevs": 2, 00:17:43.940 "num_base_bdevs_discovered": 2, 00:17:43.940 "num_base_bdevs_operational": 2, 00:17:43.940 "base_bdevs_list": [ 00:17:43.940 { 00:17:43.940 "name": "spare", 00:17:43.940 "uuid": "a7a2136b-d26c-5ed5-8b80-20f978270b24", 00:17:43.940 "is_configured": true, 00:17:43.940 "data_offset": 0, 00:17:43.940 "data_size": 65536 00:17:43.940 }, 00:17:43.940 { 00:17:43.940 "name": "BaseBdev2", 00:17:43.940 "uuid": "bb4d0152-b4cf-57b3-ac1f-1d4f1bcf7509", 00:17:43.940 "is_configured": true, 00:17:43.940 "data_offset": 0, 00:17:43.940 "data_size": 65536 00:17:43.940 } 00:17:43.940 ] 00:17:43.940 }' 00:17:43.940 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.940 13:37:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.508 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:44.508 13:37:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.508 13:37:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.508 [2024-11-20 13:37:43.771211] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:44.508 [2024-11-20 13:37:43.771252] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:44.508 [2024-11-20 13:37:43.771355] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:44.508 [2024-11-20 13:37:43.771427] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:44.508 [2024-11-20 13:37:43.771440] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:44.508 13:37:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.508 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.508 13:37:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.508 13:37:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.508 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:44.508 13:37:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.508 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:44.508 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:44.508 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:44.508 13:37:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:44.508 13:37:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:44.508 13:37:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:44.508 13:37:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:44.508 13:37:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:17:44.508 13:37:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:44.508 13:37:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:44.508 13:37:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:44.508 13:37:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:44.508 13:37:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:44.767 /dev/nbd0 00:17:44.767 13:37:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:44.767 13:37:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:44.767 13:37:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:44.767 13:37:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:44.767 13:37:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:44.767 13:37:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:44.767 13:37:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:44.767 13:37:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:44.767 13:37:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:44.767 13:37:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:44.767 13:37:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:44.767 1+0 records in 00:17:44.767 1+0 records out 00:17:44.767 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000456007 s, 9.0 MB/s 00:17:44.767 13:37:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:44.767 13:37:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:44.767 13:37:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:44.767 13:37:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:44.767 13:37:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:44.767 13:37:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:44.767 13:37:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:44.767 13:37:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:45.027 /dev/nbd1 00:17:45.027 13:37:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:45.027 13:37:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:45.027 13:37:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:45.027 13:37:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:45.027 13:37:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:45.027 13:37:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:45.027 13:37:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:45.027 13:37:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:45.027 13:37:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:45.027 13:37:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:45.027 13:37:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:45.027 1+0 records in 00:17:45.027 1+0 records out 00:17:45.027 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392288 s, 10.4 MB/s 00:17:45.027 13:37:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:45.027 13:37:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:45.027 13:37:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:45.027 13:37:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:45.027 13:37:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:45.027 13:37:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:45.027 13:37:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:45.027 13:37:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:45.285 13:37:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:45.286 13:37:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:45.286 13:37:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:45.286 13:37:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:45.286 13:37:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:45.286 13:37:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:45.286 13:37:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:45.544 13:37:44 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:45.544 13:37:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:45.544 13:37:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:45.544 13:37:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:45.544 13:37:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:45.544 13:37:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:45.544 13:37:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:45.544 13:37:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:45.544 13:37:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:45.544 13:37:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:45.805 13:37:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:45.805 13:37:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:45.805 13:37:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:45.805 13:37:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:45.805 13:37:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:45.805 13:37:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:45.805 13:37:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:45.805 13:37:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:45.805 13:37:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:45.805 13:37:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75031 00:17:45.805 13:37:45 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75031 ']' 00:17:45.805 13:37:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75031 00:17:45.805 13:37:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:17:45.805 13:37:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:45.805 13:37:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75031 00:17:45.805 13:37:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:45.805 13:37:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:45.805 killing process with pid 75031 00:17:45.805 13:37:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75031' 00:17:45.805 Received shutdown signal, test time was about 60.000000 seconds 00:17:45.805 00:17:45.805 Latency(us) 00:17:45.805 [2024-11-20T13:37:45.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.805 [2024-11-20T13:37:45.290Z] =================================================================================================================== 00:17:45.805 [2024-11-20T13:37:45.290Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:45.805 13:37:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75031 00:17:45.805 [2024-11-20 13:37:45.173475] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:45.805 13:37:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75031 00:17:46.101 [2024-11-20 13:37:45.504092] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:47.481 ************************************ 00:17:47.481 END TEST raid_rebuild_test 00:17:47.481 ************************************ 00:17:47.481 13:37:46 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@786 -- # return 0 00:17:47.481 00:17:47.481 real 0m16.850s 00:17:47.481 user 0m18.319s 00:17:47.481 sys 0m3.780s 00:17:47.481 13:37:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:47.481 13:37:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.481 13:37:46 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:17:47.481 13:37:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:47.481 13:37:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:47.481 13:37:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:47.481 ************************************ 00:17:47.481 START TEST raid_rebuild_test_sb 00:17:47.481 ************************************ 00:17:47.481 13:37:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:47.481 13:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:47.481 13:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:47.481 13:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:47.481 13:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:47.481 13:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:47.481 13:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:47.481 13:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:47.481 13:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:47.481 13:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:47.481 13:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( 
i <= num_base_bdevs )) 00:17:47.481 13:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:47.481 13:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:47.481 13:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:47.481 13:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:47.481 13:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:47.481 13:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:47.481 13:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:47.481 13:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:47.481 13:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:47.481 13:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:47.481 13:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:47.481 13:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:47.481 13:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:47.481 13:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:47.481 13:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75480 00:17:47.481 13:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:47.481 13:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75480 00:17:47.481 13:37:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75480 ']' 00:17:47.481 
13:37:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.481 13:37:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:47.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.481 13:37:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.481 13:37:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:47.481 13:37:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.481 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:47.481 Zero copy mechanism will not be used. 00:17:47.481 [2024-11-20 13:37:46.933412] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:17:47.481 [2024-11-20 13:37:46.933565] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75480 ] 00:17:47.740 [2024-11-20 13:37:47.119470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.000 [2024-11-20 13:37:47.248430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.000 [2024-11-20 13:37:47.476478] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:48.000 [2024-11-20 13:37:47.476553] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:48.569 13:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:48.569 13:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:48.569 13:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # 
for bdev in "${base_bdevs[@]}" 00:17:48.569 13:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:48.569 13:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.569 13:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.569 BaseBdev1_malloc 00:17:48.569 13:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.569 13:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:48.569 13:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.569 13:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.569 [2024-11-20 13:37:47.879552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:48.569 [2024-11-20 13:37:47.879622] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.569 [2024-11-20 13:37:47.879648] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:48.569 [2024-11-20 13:37:47.879664] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.569 [2024-11-20 13:37:47.882271] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.569 [2024-11-20 13:37:47.882313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:48.569 BaseBdev1 00:17:48.569 13:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.569 13:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:48.569 13:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:48.569 13:37:47 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.569 13:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.569 BaseBdev2_malloc 00:17:48.569 13:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.569 13:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:48.569 13:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.569 13:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.569 [2024-11-20 13:37:47.938709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:48.569 [2024-11-20 13:37:47.938782] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.569 [2024-11-20 13:37:47.938811] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:48.569 [2024-11-20 13:37:47.938827] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.569 [2024-11-20 13:37:47.941308] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.569 [2024-11-20 13:37:47.941347] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:48.569 BaseBdev2 00:17:48.569 13:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.569 13:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:48.569 13:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.569 13:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.569 spare_malloc 00:17:48.569 13:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:48.569 13:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:48.569 13:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.569 13:37:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.569 spare_delay 00:17:48.569 13:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.569 13:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:48.569 13:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.569 13:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.569 [2024-11-20 13:37:48.018195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:48.569 [2024-11-20 13:37:48.018269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.569 [2024-11-20 13:37:48.018294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:48.569 [2024-11-20 13:37:48.018309] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.569 [2024-11-20 13:37:48.020808] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.569 [2024-11-20 13:37:48.020850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:48.569 spare 00:17:48.569 13:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.569 13:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:48.569 13:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.569 13:37:48 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.569 [2024-11-20 13:37:48.030248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:48.569 [2024-11-20 13:37:48.032456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:48.569 [2024-11-20 13:37:48.032635] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:48.569 [2024-11-20 13:37:48.032653] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:48.569 [2024-11-20 13:37:48.032920] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:48.569 [2024-11-20 13:37:48.033122] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:48.569 [2024-11-20 13:37:48.033142] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:48.569 [2024-11-20 13:37:48.033326] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.569 13:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.569 13:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:48.569 13:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:48.569 13:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:48.569 13:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:48.569 13:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:48.569 13:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:48.569 13:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:17:48.569 13:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.569 13:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.569 13:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.569 13:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.569 13:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.569 13:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.569 13:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.828 13:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.828 13:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.828 "name": "raid_bdev1", 00:17:48.828 "uuid": "14364b34-214e-4b46-b586-c1b84490f982", 00:17:48.828 "strip_size_kb": 0, 00:17:48.828 "state": "online", 00:17:48.828 "raid_level": "raid1", 00:17:48.828 "superblock": true, 00:17:48.828 "num_base_bdevs": 2, 00:17:48.828 "num_base_bdevs_discovered": 2, 00:17:48.828 "num_base_bdevs_operational": 2, 00:17:48.828 "base_bdevs_list": [ 00:17:48.828 { 00:17:48.828 "name": "BaseBdev1", 00:17:48.828 "uuid": "050a7b3b-4a34-5aa4-af66-2cffdcfcd5bb", 00:17:48.828 "is_configured": true, 00:17:48.828 "data_offset": 2048, 00:17:48.828 "data_size": 63488 00:17:48.828 }, 00:17:48.828 { 00:17:48.828 "name": "BaseBdev2", 00:17:48.828 "uuid": "a765760d-e9a1-54fc-9c69-f5ef177e0336", 00:17:48.828 "is_configured": true, 00:17:48.828 "data_offset": 2048, 00:17:48.828 "data_size": 63488 00:17:48.828 } 00:17:48.828 ] 00:17:48.828 }' 00:17:48.828 13:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.828 13:37:48 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:49.088 13:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:49.088 13:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:49.088 13:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.088 13:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.088 [2024-11-20 13:37:48.465901] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:49.088 13:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.088 13:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:17:49.088 13:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.088 13:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.088 13:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.088 13:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:49.088 13:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.088 13:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:49.088 13:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:49.088 13:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:49.088 13:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:49.088 13:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:49.088 13:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:17:49.088 13:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:49.088 13:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:49.088 13:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:49.088 13:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:49.088 13:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:49.088 13:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:49.088 13:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:49.088 13:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:49.655 [2024-11-20 13:37:48.841180] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:49.655 /dev/nbd0 00:17:49.655 13:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:49.655 13:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:49.655 13:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:49.655 13:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:49.655 13:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:49.655 13:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:49.655 13:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:49.655 13:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:49.655 13:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:17:49.655 13:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:49.655 13:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:49.655 1+0 records in 00:17:49.655 1+0 records out 00:17:49.655 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355491 s, 11.5 MB/s 00:17:49.655 13:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:49.655 13:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:49.655 13:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:49.655 13:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:49.655 13:37:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:49.655 13:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:49.655 13:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:49.655 13:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:49.655 13:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:49.655 13:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:17:54.954 63488+0 records in 00:17:54.954 63488+0 records out 00:17:54.954 32505856 bytes (33 MB, 31 MiB) copied, 5.32095 s, 6.1 MB/s 00:17:54.954 13:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:54.954 13:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:54.954 13:37:54 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:54.954 13:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:54.954 13:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:54.954 13:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:54.954 13:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:55.221 [2024-11-20 13:37:54.449627] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:55.221 13:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:55.221 13:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:55.221 13:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:55.221 13:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:55.221 13:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:55.221 13:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:55.221 13:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:55.221 13:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:55.221 13:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:55.221 13:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.222 13:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.222 [2024-11-20 13:37:54.485674] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:55.222 13:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:55.222 13:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:55.222 13:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:55.222 13:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:55.222 13:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:55.222 13:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:55.222 13:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:55.222 13:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.222 13:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.222 13:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.222 13:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.222 13:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.222 13:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.222 13:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.222 13:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.222 13:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.222 13:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.222 "name": "raid_bdev1", 00:17:55.222 "uuid": "14364b34-214e-4b46-b586-c1b84490f982", 00:17:55.222 "strip_size_kb": 0, 00:17:55.222 "state": "online", 00:17:55.222 "raid_level": "raid1", 00:17:55.222 "superblock": true, 
00:17:55.222 "num_base_bdevs": 2, 00:17:55.222 "num_base_bdevs_discovered": 1, 00:17:55.222 "num_base_bdevs_operational": 1, 00:17:55.222 "base_bdevs_list": [ 00:17:55.222 { 00:17:55.222 "name": null, 00:17:55.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.222 "is_configured": false, 00:17:55.222 "data_offset": 0, 00:17:55.222 "data_size": 63488 00:17:55.222 }, 00:17:55.222 { 00:17:55.222 "name": "BaseBdev2", 00:17:55.222 "uuid": "a765760d-e9a1-54fc-9c69-f5ef177e0336", 00:17:55.222 "is_configured": true, 00:17:55.222 "data_offset": 2048, 00:17:55.222 "data_size": 63488 00:17:55.222 } 00:17:55.222 ] 00:17:55.222 }' 00:17:55.222 13:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.222 13:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.481 13:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:55.481 13:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.481 13:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.481 [2024-11-20 13:37:54.901171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:55.481 [2024-11-20 13:37:54.918526] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:17:55.481 13:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.481 13:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:55.481 [2024-11-20 13:37:54.920864] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:56.460 13:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:56.460 13:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:17:56.460 13:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:56.460 13:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:56.460 13:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:56.460 13:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.460 13:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.460 13:37:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.460 13:37:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.720 13:37:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.720 13:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.720 "name": "raid_bdev1", 00:17:56.720 "uuid": "14364b34-214e-4b46-b586-c1b84490f982", 00:17:56.720 "strip_size_kb": 0, 00:17:56.720 "state": "online", 00:17:56.720 "raid_level": "raid1", 00:17:56.720 "superblock": true, 00:17:56.720 "num_base_bdevs": 2, 00:17:56.720 "num_base_bdevs_discovered": 2, 00:17:56.720 "num_base_bdevs_operational": 2, 00:17:56.720 "process": { 00:17:56.720 "type": "rebuild", 00:17:56.720 "target": "spare", 00:17:56.720 "progress": { 00:17:56.720 "blocks": 20480, 00:17:56.720 "percent": 32 00:17:56.720 } 00:17:56.720 }, 00:17:56.720 "base_bdevs_list": [ 00:17:56.720 { 00:17:56.720 "name": "spare", 00:17:56.720 "uuid": "e6463535-ec37-5d41-ab2f-af4fe0609f90", 00:17:56.720 "is_configured": true, 00:17:56.720 "data_offset": 2048, 00:17:56.720 "data_size": 63488 00:17:56.720 }, 00:17:56.720 { 00:17:56.720 "name": "BaseBdev2", 00:17:56.720 "uuid": "a765760d-e9a1-54fc-9c69-f5ef177e0336", 00:17:56.720 "is_configured": true, 00:17:56.720 "data_offset": 2048, 00:17:56.720 "data_size": 63488 
00:17:56.720 } 00:17:56.720 ] 00:17:56.720 }' 00:17:56.720 13:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.720 13:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:56.720 13:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.720 13:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:56.720 13:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:56.720 13:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.720 13:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.720 [2024-11-20 13:37:56.052104] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:56.720 [2024-11-20 13:37:56.126076] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:56.720 [2024-11-20 13:37:56.126152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:56.720 [2024-11-20 13:37:56.126168] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:56.720 [2024-11-20 13:37:56.126183] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:56.720 13:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.720 13:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:56.720 13:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.720 13:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.720 13:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid1 00:17:56.720 13:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:56.720 13:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:56.720 13:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.720 13:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.720 13:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.720 13:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.720 13:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.720 13:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.720 13:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.720 13:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.720 13:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.980 13:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.980 "name": "raid_bdev1", 00:17:56.980 "uuid": "14364b34-214e-4b46-b586-c1b84490f982", 00:17:56.980 "strip_size_kb": 0, 00:17:56.980 "state": "online", 00:17:56.980 "raid_level": "raid1", 00:17:56.980 "superblock": true, 00:17:56.980 "num_base_bdevs": 2, 00:17:56.980 "num_base_bdevs_discovered": 1, 00:17:56.980 "num_base_bdevs_operational": 1, 00:17:56.980 "base_bdevs_list": [ 00:17:56.980 { 00:17:56.980 "name": null, 00:17:56.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.980 "is_configured": false, 00:17:56.980 "data_offset": 0, 00:17:56.980 "data_size": 63488 00:17:56.980 }, 00:17:56.980 { 00:17:56.980 "name": "BaseBdev2", 00:17:56.980 "uuid": 
"a765760d-e9a1-54fc-9c69-f5ef177e0336", 00:17:56.980 "is_configured": true, 00:17:56.980 "data_offset": 2048, 00:17:56.980 "data_size": 63488 00:17:56.980 } 00:17:56.980 ] 00:17:56.980 }' 00:17:56.980 13:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.980 13:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.240 13:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:57.240 13:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:57.240 13:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:57.240 13:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:57.240 13:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:57.240 13:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.240 13:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.240 13:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.240 13:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.240 13:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.240 13:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:57.240 "name": "raid_bdev1", 00:17:57.240 "uuid": "14364b34-214e-4b46-b586-c1b84490f982", 00:17:57.240 "strip_size_kb": 0, 00:17:57.240 "state": "online", 00:17:57.240 "raid_level": "raid1", 00:17:57.240 "superblock": true, 00:17:57.240 "num_base_bdevs": 2, 00:17:57.240 "num_base_bdevs_discovered": 1, 00:17:57.240 "num_base_bdevs_operational": 1, 00:17:57.240 "base_bdevs_list": [ 00:17:57.240 { 
00:17:57.240 "name": null, 00:17:57.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.240 "is_configured": false, 00:17:57.240 "data_offset": 0, 00:17:57.240 "data_size": 63488 00:17:57.240 }, 00:17:57.240 { 00:17:57.240 "name": "BaseBdev2", 00:17:57.240 "uuid": "a765760d-e9a1-54fc-9c69-f5ef177e0336", 00:17:57.240 "is_configured": true, 00:17:57.240 "data_offset": 2048, 00:17:57.240 "data_size": 63488 00:17:57.240 } 00:17:57.240 ] 00:17:57.240 }' 00:17:57.240 13:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:57.240 13:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:57.240 13:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:57.240 13:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:57.240 13:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:57.240 13:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.240 13:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.240 [2024-11-20 13:37:56.707037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:57.240 [2024-11-20 13:37:56.723126] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:17:57.500 13:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.500 13:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:57.500 [2024-11-20 13:37:56.725241] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:58.437 13:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:58.437 13:37:57 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:58.437 13:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:58.437 13:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:58.437 13:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:58.437 13:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.437 13:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.437 13:37:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.437 13:37:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.437 13:37:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.437 13:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.437 "name": "raid_bdev1", 00:17:58.437 "uuid": "14364b34-214e-4b46-b586-c1b84490f982", 00:17:58.437 "strip_size_kb": 0, 00:17:58.437 "state": "online", 00:17:58.437 "raid_level": "raid1", 00:17:58.437 "superblock": true, 00:17:58.437 "num_base_bdevs": 2, 00:17:58.437 "num_base_bdevs_discovered": 2, 00:17:58.437 "num_base_bdevs_operational": 2, 00:17:58.437 "process": { 00:17:58.437 "type": "rebuild", 00:17:58.437 "target": "spare", 00:17:58.437 "progress": { 00:17:58.437 "blocks": 20480, 00:17:58.437 "percent": 32 00:17:58.437 } 00:17:58.437 }, 00:17:58.437 "base_bdevs_list": [ 00:17:58.437 { 00:17:58.437 "name": "spare", 00:17:58.437 "uuid": "e6463535-ec37-5d41-ab2f-af4fe0609f90", 00:17:58.437 "is_configured": true, 00:17:58.437 "data_offset": 2048, 00:17:58.437 "data_size": 63488 00:17:58.437 }, 00:17:58.437 { 00:17:58.437 "name": "BaseBdev2", 00:17:58.437 "uuid": "a765760d-e9a1-54fc-9c69-f5ef177e0336", 00:17:58.437 
"is_configured": true, 00:17:58.437 "data_offset": 2048, 00:17:58.437 "data_size": 63488 00:17:58.437 } 00:17:58.437 ] 00:17:58.437 }' 00:17:58.437 13:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.437 13:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:58.437 13:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:58.437 13:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:58.437 13:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:58.437 13:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:58.437 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:58.437 13:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:58.437 13:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:58.437 13:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:58.437 13:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=385 00:17:58.438 13:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:58.438 13:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:58.438 13:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:58.438 13:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:58.438 13:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:58.438 13:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:17:58.438 13:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.438 13:37:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.438 13:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.438 13:37:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.438 13:37:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.438 13:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.438 "name": "raid_bdev1", 00:17:58.438 "uuid": "14364b34-214e-4b46-b586-c1b84490f982", 00:17:58.438 "strip_size_kb": 0, 00:17:58.438 "state": "online", 00:17:58.438 "raid_level": "raid1", 00:17:58.438 "superblock": true, 00:17:58.438 "num_base_bdevs": 2, 00:17:58.438 "num_base_bdevs_discovered": 2, 00:17:58.438 "num_base_bdevs_operational": 2, 00:17:58.438 "process": { 00:17:58.438 "type": "rebuild", 00:17:58.438 "target": "spare", 00:17:58.438 "progress": { 00:17:58.438 "blocks": 22528, 00:17:58.438 "percent": 35 00:17:58.438 } 00:17:58.438 }, 00:17:58.438 "base_bdevs_list": [ 00:17:58.438 { 00:17:58.438 "name": "spare", 00:17:58.438 "uuid": "e6463535-ec37-5d41-ab2f-af4fe0609f90", 00:17:58.438 "is_configured": true, 00:17:58.438 "data_offset": 2048, 00:17:58.438 "data_size": 63488 00:17:58.438 }, 00:17:58.438 { 00:17:58.438 "name": "BaseBdev2", 00:17:58.438 "uuid": "a765760d-e9a1-54fc-9c69-f5ef177e0336", 00:17:58.438 "is_configured": true, 00:17:58.438 "data_offset": 2048, 00:17:58.438 "data_size": 63488 00:17:58.438 } 00:17:58.438 ] 00:17:58.438 }' 00:17:58.697 13:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.697 13:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:58.697 13:37:57 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:58.697 13:37:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:58.697 13:37:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:59.632 13:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:59.632 13:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:59.632 13:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:59.632 13:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:59.632 13:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:59.632 13:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:59.632 13:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.632 13:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.632 13:37:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.632 13:37:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.632 13:37:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.632 13:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:59.632 "name": "raid_bdev1", 00:17:59.632 "uuid": "14364b34-214e-4b46-b586-c1b84490f982", 00:17:59.632 "strip_size_kb": 0, 00:17:59.632 "state": "online", 00:17:59.632 "raid_level": "raid1", 00:17:59.632 "superblock": true, 00:17:59.632 "num_base_bdevs": 2, 00:17:59.632 "num_base_bdevs_discovered": 2, 00:17:59.632 "num_base_bdevs_operational": 2, 00:17:59.632 "process": { 
00:17:59.632 "type": "rebuild", 00:17:59.632 "target": "spare", 00:17:59.632 "progress": { 00:17:59.632 "blocks": 45056, 00:17:59.632 "percent": 70 00:17:59.632 } 00:17:59.632 }, 00:17:59.632 "base_bdevs_list": [ 00:17:59.632 { 00:17:59.632 "name": "spare", 00:17:59.632 "uuid": "e6463535-ec37-5d41-ab2f-af4fe0609f90", 00:17:59.632 "is_configured": true, 00:17:59.632 "data_offset": 2048, 00:17:59.632 "data_size": 63488 00:17:59.632 }, 00:17:59.632 { 00:17:59.632 "name": "BaseBdev2", 00:17:59.632 "uuid": "a765760d-e9a1-54fc-9c69-f5ef177e0336", 00:17:59.632 "is_configured": true, 00:17:59.633 "data_offset": 2048, 00:17:59.633 "data_size": 63488 00:17:59.633 } 00:17:59.633 ] 00:17:59.633 }' 00:17:59.633 13:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:59.633 13:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:59.633 13:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:59.891 13:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:59.891 13:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:00.458 [2024-11-20 13:37:59.838733] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:00.458 [2024-11-20 13:37:59.838817] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:00.458 [2024-11-20 13:37:59.838941] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:00.717 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:00.717 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:00.717 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:00.717 
13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:00.717 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:00.717 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:00.717 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.717 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.717 13:38:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.717 13:38:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.717 13:38:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.976 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:00.976 "name": "raid_bdev1", 00:18:00.976 "uuid": "14364b34-214e-4b46-b586-c1b84490f982", 00:18:00.976 "strip_size_kb": 0, 00:18:00.976 "state": "online", 00:18:00.976 "raid_level": "raid1", 00:18:00.976 "superblock": true, 00:18:00.976 "num_base_bdevs": 2, 00:18:00.976 "num_base_bdevs_discovered": 2, 00:18:00.976 "num_base_bdevs_operational": 2, 00:18:00.976 "base_bdevs_list": [ 00:18:00.976 { 00:18:00.976 "name": "spare", 00:18:00.976 "uuid": "e6463535-ec37-5d41-ab2f-af4fe0609f90", 00:18:00.976 "is_configured": true, 00:18:00.976 "data_offset": 2048, 00:18:00.976 "data_size": 63488 00:18:00.976 }, 00:18:00.976 { 00:18:00.976 "name": "BaseBdev2", 00:18:00.976 "uuid": "a765760d-e9a1-54fc-9c69-f5ef177e0336", 00:18:00.976 "is_configured": true, 00:18:00.976 "data_offset": 2048, 00:18:00.976 "data_size": 63488 00:18:00.976 } 00:18:00.976 ] 00:18:00.976 }' 00:18:00.976 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.976 13:38:00 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:00.976 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.976 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:00.976 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:18:00.976 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:00.976 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:00.976 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:00.976 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:00.976 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:00.976 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.976 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.976 13:38:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.976 13:38:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.976 13:38:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.976 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:00.976 "name": "raid_bdev1", 00:18:00.976 "uuid": "14364b34-214e-4b46-b586-c1b84490f982", 00:18:00.976 "strip_size_kb": 0, 00:18:00.976 "state": "online", 00:18:00.976 "raid_level": "raid1", 00:18:00.976 "superblock": true, 00:18:00.976 "num_base_bdevs": 2, 00:18:00.976 "num_base_bdevs_discovered": 2, 00:18:00.976 "num_base_bdevs_operational": 2, 00:18:00.977 "base_bdevs_list": [ 00:18:00.977 { 00:18:00.977 
"name": "spare", 00:18:00.977 "uuid": "e6463535-ec37-5d41-ab2f-af4fe0609f90", 00:18:00.977 "is_configured": true, 00:18:00.977 "data_offset": 2048, 00:18:00.977 "data_size": 63488 00:18:00.977 }, 00:18:00.977 { 00:18:00.977 "name": "BaseBdev2", 00:18:00.977 "uuid": "a765760d-e9a1-54fc-9c69-f5ef177e0336", 00:18:00.977 "is_configured": true, 00:18:00.977 "data_offset": 2048, 00:18:00.977 "data_size": 63488 00:18:00.977 } 00:18:00.977 ] 00:18:00.977 }' 00:18:00.977 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.977 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:00.977 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.977 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:00.977 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:00.977 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:00.977 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:00.977 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:00.977 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:00.977 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:00.977 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.977 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.977 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.977 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:18:00.977 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.977 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.977 13:38:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.977 13:38:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.977 13:38:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.977 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.977 "name": "raid_bdev1", 00:18:00.977 "uuid": "14364b34-214e-4b46-b586-c1b84490f982", 00:18:00.977 "strip_size_kb": 0, 00:18:00.977 "state": "online", 00:18:00.977 "raid_level": "raid1", 00:18:00.977 "superblock": true, 00:18:00.977 "num_base_bdevs": 2, 00:18:00.977 "num_base_bdevs_discovered": 2, 00:18:00.977 "num_base_bdevs_operational": 2, 00:18:00.977 "base_bdevs_list": [ 00:18:00.977 { 00:18:00.977 "name": "spare", 00:18:00.977 "uuid": "e6463535-ec37-5d41-ab2f-af4fe0609f90", 00:18:00.977 "is_configured": true, 00:18:00.977 "data_offset": 2048, 00:18:00.977 "data_size": 63488 00:18:00.977 }, 00:18:00.977 { 00:18:00.977 "name": "BaseBdev2", 00:18:00.977 "uuid": "a765760d-e9a1-54fc-9c69-f5ef177e0336", 00:18:00.977 "is_configured": true, 00:18:00.977 "data_offset": 2048, 00:18:00.977 "data_size": 63488 00:18:00.977 } 00:18:00.977 ] 00:18:00.977 }' 00:18:00.977 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.977 13:38:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.543 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:01.543 13:38:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.543 13:38:00 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:01.543 [2024-11-20 13:38:00.811141] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:01.543 [2024-11-20 13:38:00.811178] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:01.543 [2024-11-20 13:38:00.811263] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:01.543 [2024-11-20 13:38:00.811333] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:01.543 [2024-11-20 13:38:00.811345] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:01.543 13:38:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.543 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.543 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:18:01.543 13:38:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.543 13:38:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.543 13:38:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.543 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:01.543 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:01.543 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:01.543 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:01.543 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:01.543 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:18:01.543 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:01.543 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:01.543 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:01.543 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:01.543 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:01.543 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:01.543 13:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:01.801 /dev/nbd0 00:18:01.801 13:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:01.801 13:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:01.801 13:38:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:01.801 13:38:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:01.801 13:38:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:01.801 13:38:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:01.801 13:38:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:01.801 13:38:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:01.801 13:38:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:01.801 13:38:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:01.801 13:38:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:01.801 1+0 records in 00:18:01.801 1+0 records out 00:18:01.801 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240777 s, 17.0 MB/s 00:18:01.801 13:38:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:01.801 13:38:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:01.801 13:38:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:01.801 13:38:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:01.801 13:38:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:01.801 13:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:01.801 13:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:01.801 13:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:02.060 /dev/nbd1 00:18:02.060 13:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:02.060 13:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:02.060 13:38:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:02.060 13:38:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:02.060 13:38:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:02.060 13:38:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:02.060 13:38:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:02.060 13:38:01 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:02.060 13:38:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:02.060 13:38:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:02.060 13:38:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:02.060 1+0 records in 00:18:02.060 1+0 records out 00:18:02.060 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394505 s, 10.4 MB/s 00:18:02.060 13:38:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:02.060 13:38:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:02.060 13:38:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:02.060 13:38:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:02.060 13:38:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:02.060 13:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:02.060 13:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:02.060 13:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:02.319 13:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:02.319 13:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:02.319 13:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:02.319 13:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:02.319 
13:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:02.319 13:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:02.319 13:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:02.578 13:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:02.578 13:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:02.578 13:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:02.578 13:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:02.578 13:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:02.578 13:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:02.578 13:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:02.578 13:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:02.578 13:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:02.578 13:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:02.838 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:02.838 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:02.838 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:02.838 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:02.838 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:02.838 13:38:02 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:02.838 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:02.838 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:02.838 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:02.838 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:02.838 13:38:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.838 13:38:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.838 13:38:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.838 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:02.838 13:38:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.838 13:38:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.838 [2024-11-20 13:38:02.120946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:02.838 [2024-11-20 13:38:02.121005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.838 [2024-11-20 13:38:02.121032] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:02.838 [2024-11-20 13:38:02.121045] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.838 [2024-11-20 13:38:02.123922] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.838 [2024-11-20 13:38:02.124082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:02.838 [2024-11-20 13:38:02.124198] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:02.838 [2024-11-20 
13:38:02.124254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:02.838 [2024-11-20 13:38:02.124407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:02.838 spare 00:18:02.838 13:38:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.838 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:02.838 13:38:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.838 13:38:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.838 [2024-11-20 13:38:02.224340] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:02.838 [2024-11-20 13:38:02.224395] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:02.838 [2024-11-20 13:38:02.224779] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:18:02.838 [2024-11-20 13:38:02.225000] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:02.838 [2024-11-20 13:38:02.225019] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:02.838 [2024-11-20 13:38:02.225248] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.838 13:38:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.838 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:02.838 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.838 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.838 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:18:02.838 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.838 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:02.838 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.838 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.838 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.838 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.838 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.838 13:38:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.838 13:38:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.838 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.838 13:38:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.838 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.838 "name": "raid_bdev1", 00:18:02.838 "uuid": "14364b34-214e-4b46-b586-c1b84490f982", 00:18:02.838 "strip_size_kb": 0, 00:18:02.838 "state": "online", 00:18:02.838 "raid_level": "raid1", 00:18:02.838 "superblock": true, 00:18:02.838 "num_base_bdevs": 2, 00:18:02.838 "num_base_bdevs_discovered": 2, 00:18:02.838 "num_base_bdevs_operational": 2, 00:18:02.838 "base_bdevs_list": [ 00:18:02.838 { 00:18:02.838 "name": "spare", 00:18:02.838 "uuid": "e6463535-ec37-5d41-ab2f-af4fe0609f90", 00:18:02.838 "is_configured": true, 00:18:02.838 "data_offset": 2048, 00:18:02.838 "data_size": 63488 00:18:02.838 }, 00:18:02.838 { 00:18:02.838 "name": "BaseBdev2", 00:18:02.838 "uuid": 
"a765760d-e9a1-54fc-9c69-f5ef177e0336", 00:18:02.838 "is_configured": true, 00:18:02.838 "data_offset": 2048, 00:18:02.838 "data_size": 63488 00:18:02.838 } 00:18:02.838 ] 00:18:02.838 }' 00:18:02.838 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.838 13:38:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.406 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:03.406 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:03.406 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:03.406 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:03.406 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:03.406 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.406 13:38:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.406 13:38:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.406 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.406 13:38:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.406 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:03.406 "name": "raid_bdev1", 00:18:03.406 "uuid": "14364b34-214e-4b46-b586-c1b84490f982", 00:18:03.406 "strip_size_kb": 0, 00:18:03.406 "state": "online", 00:18:03.406 "raid_level": "raid1", 00:18:03.406 "superblock": true, 00:18:03.406 "num_base_bdevs": 2, 00:18:03.406 "num_base_bdevs_discovered": 2, 00:18:03.407 "num_base_bdevs_operational": 2, 00:18:03.407 "base_bdevs_list": [ 00:18:03.407 { 
00:18:03.407 "name": "spare", 00:18:03.407 "uuid": "e6463535-ec37-5d41-ab2f-af4fe0609f90", 00:18:03.407 "is_configured": true, 00:18:03.407 "data_offset": 2048, 00:18:03.407 "data_size": 63488 00:18:03.407 }, 00:18:03.407 { 00:18:03.407 "name": "BaseBdev2", 00:18:03.407 "uuid": "a765760d-e9a1-54fc-9c69-f5ef177e0336", 00:18:03.407 "is_configured": true, 00:18:03.407 "data_offset": 2048, 00:18:03.407 "data_size": 63488 00:18:03.407 } 00:18:03.407 ] 00:18:03.407 }' 00:18:03.407 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:03.407 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:03.407 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:03.407 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:03.407 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.407 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:03.407 13:38:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.407 13:38:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.407 13:38:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.407 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:03.407 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:03.407 13:38:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.407 13:38:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.407 [2024-11-20 13:38:02.840335] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:18:03.407 13:38:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.407 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:03.407 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.407 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.407 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:03.407 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:03.407 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:03.407 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.407 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.407 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.407 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.407 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.407 13:38:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.407 13:38:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.407 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.407 13:38:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.666 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.666 "name": "raid_bdev1", 00:18:03.666 "uuid": "14364b34-214e-4b46-b586-c1b84490f982", 00:18:03.666 "strip_size_kb": 0, 00:18:03.666 
"state": "online", 00:18:03.666 "raid_level": "raid1", 00:18:03.666 "superblock": true, 00:18:03.666 "num_base_bdevs": 2, 00:18:03.666 "num_base_bdevs_discovered": 1, 00:18:03.666 "num_base_bdevs_operational": 1, 00:18:03.666 "base_bdevs_list": [ 00:18:03.666 { 00:18:03.666 "name": null, 00:18:03.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.666 "is_configured": false, 00:18:03.666 "data_offset": 0, 00:18:03.666 "data_size": 63488 00:18:03.666 }, 00:18:03.666 { 00:18:03.666 "name": "BaseBdev2", 00:18:03.666 "uuid": "a765760d-e9a1-54fc-9c69-f5ef177e0336", 00:18:03.666 "is_configured": true, 00:18:03.666 "data_offset": 2048, 00:18:03.666 "data_size": 63488 00:18:03.666 } 00:18:03.666 ] 00:18:03.666 }' 00:18:03.666 13:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.666 13:38:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.925 13:38:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:03.925 13:38:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.925 13:38:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.925 [2024-11-20 13:38:03.284123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:03.925 [2024-11-20 13:38:03.284311] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:03.925 [2024-11-20 13:38:03.284331] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:03.925 [2024-11-20 13:38:03.284373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:03.925 [2024-11-20 13:38:03.300746] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:18:03.925 13:38:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.925 13:38:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:03.925 [2024-11-20 13:38:03.302914] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:04.863 13:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:04.863 13:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:04.863 13:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:04.863 13:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:04.863 13:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:04.863 13:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.863 13:38:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.863 13:38:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.863 13:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.863 13:38:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.122 13:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:05.122 "name": "raid_bdev1", 00:18:05.122 "uuid": "14364b34-214e-4b46-b586-c1b84490f982", 00:18:05.122 "strip_size_kb": 0, 00:18:05.122 "state": "online", 00:18:05.122 "raid_level": "raid1", 
00:18:05.122 "superblock": true, 00:18:05.122 "num_base_bdevs": 2, 00:18:05.122 "num_base_bdevs_discovered": 2, 00:18:05.122 "num_base_bdevs_operational": 2, 00:18:05.123 "process": { 00:18:05.123 "type": "rebuild", 00:18:05.123 "target": "spare", 00:18:05.123 "progress": { 00:18:05.123 "blocks": 20480, 00:18:05.123 "percent": 32 00:18:05.123 } 00:18:05.123 }, 00:18:05.123 "base_bdevs_list": [ 00:18:05.123 { 00:18:05.123 "name": "spare", 00:18:05.123 "uuid": "e6463535-ec37-5d41-ab2f-af4fe0609f90", 00:18:05.123 "is_configured": true, 00:18:05.123 "data_offset": 2048, 00:18:05.123 "data_size": 63488 00:18:05.123 }, 00:18:05.123 { 00:18:05.123 "name": "BaseBdev2", 00:18:05.123 "uuid": "a765760d-e9a1-54fc-9c69-f5ef177e0336", 00:18:05.123 "is_configured": true, 00:18:05.123 "data_offset": 2048, 00:18:05.123 "data_size": 63488 00:18:05.123 } 00:18:05.123 ] 00:18:05.123 }' 00:18:05.123 13:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:05.123 13:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:05.123 13:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:05.123 13:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:05.123 13:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:05.123 13:38:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.123 13:38:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.123 [2024-11-20 13:38:04.446777] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:05.123 [2024-11-20 13:38:04.508227] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:05.123 [2024-11-20 13:38:04.508462] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:18:05.123 [2024-11-20 13:38:04.508484] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:05.123 [2024-11-20 13:38:04.508498] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:05.123 13:38:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.123 13:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:05.123 13:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:05.123 13:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:05.123 13:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:05.123 13:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:05.123 13:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:05.123 13:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.123 13:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.123 13:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.123 13:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.123 13:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.123 13:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.123 13:38:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.123 13:38:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.123 13:38:04 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.123 13:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.123 "name": "raid_bdev1", 00:18:05.123 "uuid": "14364b34-214e-4b46-b586-c1b84490f982", 00:18:05.123 "strip_size_kb": 0, 00:18:05.123 "state": "online", 00:18:05.123 "raid_level": "raid1", 00:18:05.123 "superblock": true, 00:18:05.123 "num_base_bdevs": 2, 00:18:05.123 "num_base_bdevs_discovered": 1, 00:18:05.123 "num_base_bdevs_operational": 1, 00:18:05.123 "base_bdevs_list": [ 00:18:05.123 { 00:18:05.123 "name": null, 00:18:05.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.123 "is_configured": false, 00:18:05.123 "data_offset": 0, 00:18:05.123 "data_size": 63488 00:18:05.123 }, 00:18:05.123 { 00:18:05.123 "name": "BaseBdev2", 00:18:05.123 "uuid": "a765760d-e9a1-54fc-9c69-f5ef177e0336", 00:18:05.123 "is_configured": true, 00:18:05.123 "data_offset": 2048, 00:18:05.123 "data_size": 63488 00:18:05.123 } 00:18:05.123 ] 00:18:05.123 }' 00:18:05.123 13:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.123 13:38:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.690 13:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:05.690 13:38:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.690 13:38:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.690 [2024-11-20 13:38:04.935533] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:05.690 [2024-11-20 13:38:04.935736] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.691 [2024-11-20 13:38:04.935768] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:05.691 [2024-11-20 13:38:04.935783] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.691 [2024-11-20 13:38:04.936267] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.691 [2024-11-20 13:38:04.936299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:05.691 [2024-11-20 13:38:04.936399] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:05.691 [2024-11-20 13:38:04.936416] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:05.691 [2024-11-20 13:38:04.936427] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:05.691 [2024-11-20 13:38:04.936460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:05.691 [2024-11-20 13:38:04.952410] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:18:05.691 spare 00:18:05.691 13:38:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.691 13:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:05.691 [2024-11-20 13:38:04.954478] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:06.626 13:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:06.626 13:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:06.626 13:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:06.626 13:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:06.626 13:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:06.626 13:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:06.626 13:38:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.626 13:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.626 13:38:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.626 13:38:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.626 13:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:06.626 "name": "raid_bdev1", 00:18:06.626 "uuid": "14364b34-214e-4b46-b586-c1b84490f982", 00:18:06.626 "strip_size_kb": 0, 00:18:06.626 "state": "online", 00:18:06.626 "raid_level": "raid1", 00:18:06.626 "superblock": true, 00:18:06.626 "num_base_bdevs": 2, 00:18:06.626 "num_base_bdevs_discovered": 2, 00:18:06.626 "num_base_bdevs_operational": 2, 00:18:06.626 "process": { 00:18:06.626 "type": "rebuild", 00:18:06.626 "target": "spare", 00:18:06.626 "progress": { 00:18:06.626 "blocks": 20480, 00:18:06.626 "percent": 32 00:18:06.626 } 00:18:06.626 }, 00:18:06.626 "base_bdevs_list": [ 00:18:06.626 { 00:18:06.626 "name": "spare", 00:18:06.626 "uuid": "e6463535-ec37-5d41-ab2f-af4fe0609f90", 00:18:06.626 "is_configured": true, 00:18:06.626 "data_offset": 2048, 00:18:06.626 "data_size": 63488 00:18:06.626 }, 00:18:06.626 { 00:18:06.626 "name": "BaseBdev2", 00:18:06.626 "uuid": "a765760d-e9a1-54fc-9c69-f5ef177e0336", 00:18:06.626 "is_configured": true, 00:18:06.626 "data_offset": 2048, 00:18:06.626 "data_size": 63488 00:18:06.626 } 00:18:06.626 ] 00:18:06.626 }' 00:18:06.626 13:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:06.626 13:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:06.626 13:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:06.626 
13:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:06.626 13:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:06.626 13:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.626 13:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.626 [2024-11-20 13:38:06.098452] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:06.885 [2024-11-20 13:38:06.159708] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:06.885 [2024-11-20 13:38:06.159768] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.885 [2024-11-20 13:38:06.159788] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:06.885 [2024-11-20 13:38:06.159796] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:06.885 13:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.885 13:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:06.885 13:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.885 13:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.885 13:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.885 13:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.885 13:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:06.885 13:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.885 13:38:06 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.885 13:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.885 13:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.885 13:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.885 13:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.885 13:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.885 13:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.885 13:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.885 13:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.885 "name": "raid_bdev1", 00:18:06.885 "uuid": "14364b34-214e-4b46-b586-c1b84490f982", 00:18:06.885 "strip_size_kb": 0, 00:18:06.885 "state": "online", 00:18:06.885 "raid_level": "raid1", 00:18:06.885 "superblock": true, 00:18:06.885 "num_base_bdevs": 2, 00:18:06.885 "num_base_bdevs_discovered": 1, 00:18:06.885 "num_base_bdevs_operational": 1, 00:18:06.885 "base_bdevs_list": [ 00:18:06.885 { 00:18:06.885 "name": null, 00:18:06.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.885 "is_configured": false, 00:18:06.885 "data_offset": 0, 00:18:06.885 "data_size": 63488 00:18:06.885 }, 00:18:06.885 { 00:18:06.885 "name": "BaseBdev2", 00:18:06.885 "uuid": "a765760d-e9a1-54fc-9c69-f5ef177e0336", 00:18:06.885 "is_configured": true, 00:18:06.885 "data_offset": 2048, 00:18:06.885 "data_size": 63488 00:18:06.885 } 00:18:06.885 ] 00:18:06.885 }' 00:18:06.885 13:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.885 13:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.144 13:38:06 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:07.144 13:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:07.144 13:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:07.144 13:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:07.144 13:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:07.402 13:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.402 13:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.402 13:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.402 13:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.402 13:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.403 13:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:07.403 "name": "raid_bdev1", 00:18:07.403 "uuid": "14364b34-214e-4b46-b586-c1b84490f982", 00:18:07.403 "strip_size_kb": 0, 00:18:07.403 "state": "online", 00:18:07.403 "raid_level": "raid1", 00:18:07.403 "superblock": true, 00:18:07.403 "num_base_bdevs": 2, 00:18:07.403 "num_base_bdevs_discovered": 1, 00:18:07.403 "num_base_bdevs_operational": 1, 00:18:07.403 "base_bdevs_list": [ 00:18:07.403 { 00:18:07.403 "name": null, 00:18:07.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.403 "is_configured": false, 00:18:07.403 "data_offset": 0, 00:18:07.403 "data_size": 63488 00:18:07.403 }, 00:18:07.403 { 00:18:07.403 "name": "BaseBdev2", 00:18:07.403 "uuid": "a765760d-e9a1-54fc-9c69-f5ef177e0336", 00:18:07.403 "is_configured": true, 00:18:07.403 "data_offset": 2048, 00:18:07.403 "data_size": 
63488 00:18:07.403 } 00:18:07.403 ] 00:18:07.403 }' 00:18:07.403 13:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:07.403 13:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:07.403 13:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:07.403 13:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:07.403 13:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:07.403 13:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.403 13:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.403 13:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.403 13:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:07.403 13:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.403 13:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.403 [2024-11-20 13:38:06.791597] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:07.403 [2024-11-20 13:38:06.791663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.403 [2024-11-20 13:38:06.791696] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:07.403 [2024-11-20 13:38:06.791719] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.403 [2024-11-20 13:38:06.792203] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.403 [2024-11-20 13:38:06.792351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:18:07.403 [2024-11-20 13:38:06.792470] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:07.403 [2024-11-20 13:38:06.792487] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:07.403 [2024-11-20 13:38:06.792499] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:07.403 [2024-11-20 13:38:06.792511] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:07.403 BaseBdev1 00:18:07.403 13:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.403 13:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:08.338 13:38:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:08.338 13:38:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:08.338 13:38:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:08.338 13:38:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:08.338 13:38:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:08.338 13:38:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:08.338 13:38:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.338 13:38:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.338 13:38:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.339 13:38:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.339 13:38:07 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.339 13:38:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.339 13:38:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.339 13:38:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.597 13:38:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.597 13:38:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.597 "name": "raid_bdev1", 00:18:08.597 "uuid": "14364b34-214e-4b46-b586-c1b84490f982", 00:18:08.597 "strip_size_kb": 0, 00:18:08.597 "state": "online", 00:18:08.597 "raid_level": "raid1", 00:18:08.597 "superblock": true, 00:18:08.597 "num_base_bdevs": 2, 00:18:08.597 "num_base_bdevs_discovered": 1, 00:18:08.597 "num_base_bdevs_operational": 1, 00:18:08.597 "base_bdevs_list": [ 00:18:08.597 { 00:18:08.597 "name": null, 00:18:08.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.597 "is_configured": false, 00:18:08.597 "data_offset": 0, 00:18:08.597 "data_size": 63488 00:18:08.597 }, 00:18:08.597 { 00:18:08.597 "name": "BaseBdev2", 00:18:08.597 "uuid": "a765760d-e9a1-54fc-9c69-f5ef177e0336", 00:18:08.597 "is_configured": true, 00:18:08.597 "data_offset": 2048, 00:18:08.597 "data_size": 63488 00:18:08.597 } 00:18:08.597 ] 00:18:08.597 }' 00:18:08.597 13:38:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.597 13:38:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.856 13:38:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:08.856 13:38:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:08.856 13:38:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:18:08.856 13:38:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:08.856 13:38:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:08.856 13:38:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.856 13:38:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.856 13:38:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.856 13:38:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.856 13:38:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.856 13:38:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:08.856 "name": "raid_bdev1", 00:18:08.856 "uuid": "14364b34-214e-4b46-b586-c1b84490f982", 00:18:08.856 "strip_size_kb": 0, 00:18:08.856 "state": "online", 00:18:08.856 "raid_level": "raid1", 00:18:08.856 "superblock": true, 00:18:08.856 "num_base_bdevs": 2, 00:18:08.856 "num_base_bdevs_discovered": 1, 00:18:08.856 "num_base_bdevs_operational": 1, 00:18:08.856 "base_bdevs_list": [ 00:18:08.856 { 00:18:08.856 "name": null, 00:18:08.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.856 "is_configured": false, 00:18:08.856 "data_offset": 0, 00:18:08.856 "data_size": 63488 00:18:08.856 }, 00:18:08.856 { 00:18:08.856 "name": "BaseBdev2", 00:18:08.856 "uuid": "a765760d-e9a1-54fc-9c69-f5ef177e0336", 00:18:08.856 "is_configured": true, 00:18:08.856 "data_offset": 2048, 00:18:08.856 "data_size": 63488 00:18:08.856 } 00:18:08.856 ] 00:18:08.856 }' 00:18:08.856 13:38:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:08.856 13:38:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:08.856 13:38:08 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.856 13:38:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:08.856 13:38:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:08.856 13:38:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:18:08.856 13:38:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:08.856 13:38:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:08.856 13:38:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:08.856 13:38:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:08.856 13:38:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:08.856 13:38:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:08.856 13:38:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.856 13:38:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.856 [2024-11-20 13:38:08.302430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:08.856 [2024-11-20 13:38:08.302601] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:08.856 [2024-11-20 13:38:08.302624] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:08.856 request: 00:18:08.856 { 00:18:08.856 "base_bdev": "BaseBdev1", 00:18:08.856 "raid_bdev": "raid_bdev1", 00:18:08.856 "method": 
"bdev_raid_add_base_bdev", 00:18:08.856 "req_id": 1 00:18:08.856 } 00:18:08.856 Got JSON-RPC error response 00:18:08.856 response: 00:18:08.856 { 00:18:08.856 "code": -22, 00:18:08.856 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:08.856 } 00:18:08.856 13:38:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:08.856 13:38:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:18:08.856 13:38:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:08.856 13:38:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:08.856 13:38:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:08.856 13:38:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:10.237 13:38:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:10.237 13:38:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:10.237 13:38:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:10.237 13:38:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:10.237 13:38:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:10.237 13:38:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:10.237 13:38:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.237 13:38:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.237 13:38:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.237 13:38:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.237 13:38:09 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.237 13:38:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.237 13:38:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.237 13:38:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.237 13:38:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.237 13:38:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.237 "name": "raid_bdev1", 00:18:10.237 "uuid": "14364b34-214e-4b46-b586-c1b84490f982", 00:18:10.237 "strip_size_kb": 0, 00:18:10.237 "state": "online", 00:18:10.237 "raid_level": "raid1", 00:18:10.237 "superblock": true, 00:18:10.237 "num_base_bdevs": 2, 00:18:10.237 "num_base_bdevs_discovered": 1, 00:18:10.237 "num_base_bdevs_operational": 1, 00:18:10.237 "base_bdevs_list": [ 00:18:10.237 { 00:18:10.237 "name": null, 00:18:10.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.237 "is_configured": false, 00:18:10.237 "data_offset": 0, 00:18:10.237 "data_size": 63488 00:18:10.237 }, 00:18:10.237 { 00:18:10.237 "name": "BaseBdev2", 00:18:10.237 "uuid": "a765760d-e9a1-54fc-9c69-f5ef177e0336", 00:18:10.237 "is_configured": true, 00:18:10.237 "data_offset": 2048, 00:18:10.237 "data_size": 63488 00:18:10.237 } 00:18:10.237 ] 00:18:10.237 }' 00:18:10.237 13:38:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.237 13:38:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.496 13:38:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:10.496 13:38:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:10.496 13:38:09 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:10.496 13:38:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:10.496 13:38:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:10.496 13:38:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.496 13:38:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.496 13:38:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.496 13:38:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.496 13:38:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.496 13:38:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:10.496 "name": "raid_bdev1", 00:18:10.496 "uuid": "14364b34-214e-4b46-b586-c1b84490f982", 00:18:10.496 "strip_size_kb": 0, 00:18:10.496 "state": "online", 00:18:10.496 "raid_level": "raid1", 00:18:10.496 "superblock": true, 00:18:10.496 "num_base_bdevs": 2, 00:18:10.496 "num_base_bdevs_discovered": 1, 00:18:10.496 "num_base_bdevs_operational": 1, 00:18:10.496 "base_bdevs_list": [ 00:18:10.496 { 00:18:10.496 "name": null, 00:18:10.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.496 "is_configured": false, 00:18:10.496 "data_offset": 0, 00:18:10.496 "data_size": 63488 00:18:10.496 }, 00:18:10.496 { 00:18:10.496 "name": "BaseBdev2", 00:18:10.496 "uuid": "a765760d-e9a1-54fc-9c69-f5ef177e0336", 00:18:10.496 "is_configured": true, 00:18:10.496 "data_offset": 2048, 00:18:10.496 "data_size": 63488 00:18:10.496 } 00:18:10.496 ] 00:18:10.496 }' 00:18:10.496 13:38:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:10.496 13:38:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:18:10.496 13:38:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:10.496 13:38:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:10.496 13:38:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75480 00:18:10.496 13:38:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75480 ']' 00:18:10.496 13:38:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75480 00:18:10.496 13:38:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:18:10.496 13:38:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:10.496 13:38:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75480 00:18:10.496 killing process with pid 75480 00:18:10.496 Received shutdown signal, test time was about 60.000000 seconds 00:18:10.496 00:18:10.496 Latency(us) 00:18:10.496 [2024-11-20T13:38:09.981Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.496 [2024-11-20T13:38:09.981Z] =================================================================================================================== 00:18:10.496 [2024-11-20T13:38:09.981Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:10.496 13:38:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:10.496 13:38:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:10.496 13:38:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75480' 00:18:10.496 13:38:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75480 00:18:10.496 [2024-11-20 13:38:09.905737] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:10.496 13:38:09 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75480 00:18:10.496 [2024-11-20 13:38:09.905867] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:10.496 [2024-11-20 13:38:09.905916] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:10.497 [2024-11-20 13:38:09.905930] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:10.755 [2024-11-20 13:38:10.207723] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:12.130 13:38:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:18:12.130 00:18:12.130 real 0m24.567s 00:18:12.130 user 0m28.998s 00:18:12.130 sys 0m4.690s 00:18:12.130 13:38:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:12.130 13:38:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.130 ************************************ 00:18:12.131 END TEST raid_rebuild_test_sb 00:18:12.131 ************************************ 00:18:12.131 13:38:11 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:18:12.131 13:38:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:12.131 13:38:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:12.131 13:38:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:12.131 ************************************ 00:18:12.131 START TEST raid_rebuild_test_io 00:18:12.131 ************************************ 00:18:12.131 13:38:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:18:12.131 13:38:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:12.131 13:38:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:18:12.131 13:38:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:18:12.131 13:38:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:18:12.131 13:38:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:12.131 13:38:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:12.131 13:38:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:12.131 13:38:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:12.131 13:38:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:12.131 13:38:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:12.131 13:38:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:12.131 13:38:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:12.131 13:38:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:12.131 13:38:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:12.131 13:38:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:12.131 13:38:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:12.131 13:38:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:12.131 13:38:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:12.131 13:38:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:12.131 13:38:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:12.131 13:38:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:12.131 
13:38:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:12.131 13:38:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:18:12.131 13:38:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76232 00:18:12.131 13:38:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76232 00:18:12.131 13:38:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:12.131 13:38:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76232 ']' 00:18:12.131 13:38:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:12.131 13:38:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:12.131 13:38:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:12.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:12.131 13:38:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:12.131 13:38:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:12.131 [2024-11-20 13:38:11.573635] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:18:12.131 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:12.131 Zero copy mechanism will not be used. 
00:18:12.131 [2024-11-20 13:38:11.573798] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76232 ] 00:18:12.390 [2024-11-20 13:38:11.782125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.649 [2024-11-20 13:38:11.907609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.649 [2024-11-20 13:38:12.131113] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:12.649 [2024-11-20 13:38:12.131187] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:13.217 13:38:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:13.217 13:38:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:18:13.217 13:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:13.217 13:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:13.217 13:38:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.217 13:38:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:13.217 BaseBdev1_malloc 00:18:13.217 13:38:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.217 13:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:13.217 13:38:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.217 13:38:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:13.217 [2024-11-20 13:38:12.480535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:18:13.217 [2024-11-20 13:38:12.480603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:13.217 [2024-11-20 13:38:12.480627] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:13.217 [2024-11-20 13:38:12.480643] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:13.217 [2024-11-20 13:38:12.483208] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:13.217 [2024-11-20 13:38:12.483260] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:13.217 BaseBdev1 00:18:13.217 13:38:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.217 13:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:13.217 13:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:13.217 13:38:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.217 13:38:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:13.217 BaseBdev2_malloc 00:18:13.217 13:38:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.217 13:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:13.217 13:38:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.217 13:38:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:13.217 [2024-11-20 13:38:12.531453] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:13.217 [2024-11-20 13:38:12.531518] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:13.217 [2024-11-20 13:38:12.531547] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:13.217 [2024-11-20 13:38:12.531563] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:13.217 [2024-11-20 13:38:12.534103] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:13.217 [2024-11-20 13:38:12.534147] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:13.217 BaseBdev2 00:18:13.217 13:38:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.217 13:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:13.217 13:38:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.217 13:38:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:13.217 spare_malloc 00:18:13.217 13:38:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.217 13:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:13.217 13:38:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.217 13:38:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:13.217 spare_delay 00:18:13.217 13:38:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.217 13:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:13.217 13:38:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.217 13:38:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:13.218 [2024-11-20 13:38:12.605530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:18:13.218 [2024-11-20 13:38:12.605594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:13.218 [2024-11-20 13:38:12.605617] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:13.218 [2024-11-20 13:38:12.605633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:13.218 [2024-11-20 13:38:12.608167] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:13.218 [2024-11-20 13:38:12.608211] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:13.218 spare 00:18:13.218 13:38:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.218 13:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:13.218 13:38:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.218 13:38:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:13.218 [2024-11-20 13:38:12.617572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:13.218 [2024-11-20 13:38:12.619760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:13.218 [2024-11-20 13:38:12.619862] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:13.218 [2024-11-20 13:38:12.619880] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:13.218 [2024-11-20 13:38:12.620179] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:13.218 [2024-11-20 13:38:12.620356] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:13.218 [2024-11-20 13:38:12.620376] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:18:13.218 [2024-11-20 13:38:12.620533] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:13.218 13:38:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.218 13:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:13.218 13:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:13.218 13:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:13.218 13:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:13.218 13:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:13.218 13:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:13.218 13:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.218 13:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.218 13:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.218 13:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.218 13:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.218 13:38:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.218 13:38:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:13.218 13:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.218 13:38:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.218 13:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.218 
"name": "raid_bdev1", 00:18:13.218 "uuid": "15176db3-186e-4d17-97dd-e35e44f5facf", 00:18:13.218 "strip_size_kb": 0, 00:18:13.218 "state": "online", 00:18:13.218 "raid_level": "raid1", 00:18:13.218 "superblock": false, 00:18:13.218 "num_base_bdevs": 2, 00:18:13.218 "num_base_bdevs_discovered": 2, 00:18:13.218 "num_base_bdevs_operational": 2, 00:18:13.218 "base_bdevs_list": [ 00:18:13.218 { 00:18:13.218 "name": "BaseBdev1", 00:18:13.218 "uuid": "f4bcb0cb-d739-5e8b-8f37-2eb962bf4052", 00:18:13.218 "is_configured": true, 00:18:13.218 "data_offset": 0, 00:18:13.218 "data_size": 65536 00:18:13.218 }, 00:18:13.218 { 00:18:13.218 "name": "BaseBdev2", 00:18:13.218 "uuid": "2ee254a8-8396-56a5-9883-06a868883c9a", 00:18:13.218 "is_configured": true, 00:18:13.218 "data_offset": 0, 00:18:13.218 "data_size": 65536 00:18:13.218 } 00:18:13.218 ] 00:18:13.218 }' 00:18:13.218 13:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.218 13:38:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:13.786 13:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:13.786 13:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:13.786 13:38:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.786 13:38:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:13.786 [2024-11-20 13:38:13.017394] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:13.787 13:38:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.787 13:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:18:13.787 13:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.787 13:38:13 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.787 13:38:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:13.787 13:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:13.787 13:38:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.787 13:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:18:13.787 13:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:18:13.787 13:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:13.787 13:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:13.787 13:38:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.787 13:38:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:13.787 [2024-11-20 13:38:13.096955] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:13.787 13:38:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.787 13:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:13.787 13:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:13.787 13:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:13.787 13:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:13.787 13:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:13.787 13:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:13.787 13:38:13 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.787 13:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.787 13:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.787 13:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.787 13:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.787 13:38:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.787 13:38:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:13.787 13:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.787 13:38:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.787 13:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.787 "name": "raid_bdev1", 00:18:13.787 "uuid": "15176db3-186e-4d17-97dd-e35e44f5facf", 00:18:13.787 "strip_size_kb": 0, 00:18:13.787 "state": "online", 00:18:13.787 "raid_level": "raid1", 00:18:13.787 "superblock": false, 00:18:13.787 "num_base_bdevs": 2, 00:18:13.787 "num_base_bdevs_discovered": 1, 00:18:13.787 "num_base_bdevs_operational": 1, 00:18:13.787 "base_bdevs_list": [ 00:18:13.787 { 00:18:13.787 "name": null, 00:18:13.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.787 "is_configured": false, 00:18:13.787 "data_offset": 0, 00:18:13.787 "data_size": 65536 00:18:13.787 }, 00:18:13.787 { 00:18:13.787 "name": "BaseBdev2", 00:18:13.787 "uuid": "2ee254a8-8396-56a5-9883-06a868883c9a", 00:18:13.787 "is_configured": true, 00:18:13.787 "data_offset": 0, 00:18:13.787 "data_size": 65536 00:18:13.787 } 00:18:13.787 ] 00:18:13.787 }' 00:18:13.787 13:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:18:13.787 13:38:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:13.787 [2024-11-20 13:38:13.205670] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:13.787 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:13.787 Zero copy mechanism will not be used. 00:18:13.787 Running I/O for 60 seconds... 00:18:14.045 13:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:14.045 13:38:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.045 13:38:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:14.045 [2024-11-20 13:38:13.490241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:14.304 13:38:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.304 13:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:14.304 [2024-11-20 13:38:13.555548] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:14.304 [2024-11-20 13:38:13.557833] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:14.304 [2024-11-20 13:38:13.665393] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:14.304 [2024-11-20 13:38:13.665888] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:14.564 [2024-11-20 13:38:13.794142] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:14.564 [2024-11-20 13:38:13.794462] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:14.823 [2024-11-20 13:38:14.133091] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:14.823 [2024-11-20 13:38:14.133623] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:15.082 153.00 IOPS, 459.00 MiB/s [2024-11-20T13:38:14.567Z] [2024-11-20 13:38:14.343643] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:15.082 [2024-11-20 13:38:14.343966] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:15.082 13:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:15.082 13:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:15.082 13:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:15.082 13:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:15.082 13:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:15.082 13:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.082 13:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.082 13:38:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.082 13:38:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:15.341 13:38:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.341 13:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:15.341 "name": "raid_bdev1", 00:18:15.341 "uuid": "15176db3-186e-4d17-97dd-e35e44f5facf", 00:18:15.341 
"strip_size_kb": 0, 00:18:15.341 "state": "online", 00:18:15.341 "raid_level": "raid1", 00:18:15.341 "superblock": false, 00:18:15.341 "num_base_bdevs": 2, 00:18:15.341 "num_base_bdevs_discovered": 2, 00:18:15.341 "num_base_bdevs_operational": 2, 00:18:15.341 "process": { 00:18:15.341 "type": "rebuild", 00:18:15.341 "target": "spare", 00:18:15.341 "progress": { 00:18:15.341 "blocks": 10240, 00:18:15.341 "percent": 15 00:18:15.341 } 00:18:15.341 }, 00:18:15.341 "base_bdevs_list": [ 00:18:15.341 { 00:18:15.341 "name": "spare", 00:18:15.341 "uuid": "ca7cd451-9ea2-586b-9fb7-96101871bc6e", 00:18:15.341 "is_configured": true, 00:18:15.341 "data_offset": 0, 00:18:15.341 "data_size": 65536 00:18:15.341 }, 00:18:15.341 { 00:18:15.341 "name": "BaseBdev2", 00:18:15.341 "uuid": "2ee254a8-8396-56a5-9883-06a868883c9a", 00:18:15.341 "is_configured": true, 00:18:15.341 "data_offset": 0, 00:18:15.341 "data_size": 65536 00:18:15.341 } 00:18:15.341 ] 00:18:15.341 }' 00:18:15.341 13:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:15.341 13:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:15.341 13:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:15.341 13:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:15.341 13:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:15.341 13:38:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.341 13:38:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:15.341 [2024-11-20 13:38:14.670148] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:15.341 [2024-11-20 13:38:14.692562] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such 
device 00:18:15.341 [2024-11-20 13:38:14.700326] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:15.341 [2024-11-20 13:38:14.700365] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:15.341 [2024-11-20 13:38:14.700381] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:15.341 [2024-11-20 13:38:14.745926] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:18:15.341 13:38:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.341 13:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:15.341 13:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:15.341 13:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:15.341 13:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:15.341 13:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:15.341 13:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:15.341 13:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.341 13:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.342 13:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.342 13:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.342 13:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.342 13:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.342 13:38:14 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.342 13:38:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:15.342 13:38:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.342 13:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.342 "name": "raid_bdev1", 00:18:15.342 "uuid": "15176db3-186e-4d17-97dd-e35e44f5facf", 00:18:15.342 "strip_size_kb": 0, 00:18:15.342 "state": "online", 00:18:15.342 "raid_level": "raid1", 00:18:15.342 "superblock": false, 00:18:15.342 "num_base_bdevs": 2, 00:18:15.342 "num_base_bdevs_discovered": 1, 00:18:15.342 "num_base_bdevs_operational": 1, 00:18:15.342 "base_bdevs_list": [ 00:18:15.342 { 00:18:15.342 "name": null, 00:18:15.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.342 "is_configured": false, 00:18:15.342 "data_offset": 0, 00:18:15.342 "data_size": 65536 00:18:15.342 }, 00:18:15.342 { 00:18:15.342 "name": "BaseBdev2", 00:18:15.342 "uuid": "2ee254a8-8396-56a5-9883-06a868883c9a", 00:18:15.342 "is_configured": true, 00:18:15.342 "data_offset": 0, 00:18:15.342 "data_size": 65536 00:18:15.342 } 00:18:15.342 ] 00:18:15.342 }' 00:18:15.342 13:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.342 13:38:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:15.911 157.50 IOPS, 472.50 MiB/s [2024-11-20T13:38:15.396Z] 13:38:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:15.911 13:38:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:15.911 13:38:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:15.911 13:38:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:15.911 13:38:15 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:15.911 13:38:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.911 13:38:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.911 13:38:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.911 13:38:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:15.911 13:38:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.911 13:38:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:15.911 "name": "raid_bdev1", 00:18:15.911 "uuid": "15176db3-186e-4d17-97dd-e35e44f5facf", 00:18:15.911 "strip_size_kb": 0, 00:18:15.911 "state": "online", 00:18:15.911 "raid_level": "raid1", 00:18:15.911 "superblock": false, 00:18:15.911 "num_base_bdevs": 2, 00:18:15.911 "num_base_bdevs_discovered": 1, 00:18:15.911 "num_base_bdevs_operational": 1, 00:18:15.911 "base_bdevs_list": [ 00:18:15.911 { 00:18:15.911 "name": null, 00:18:15.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.911 "is_configured": false, 00:18:15.911 "data_offset": 0, 00:18:15.911 "data_size": 65536 00:18:15.911 }, 00:18:15.911 { 00:18:15.911 "name": "BaseBdev2", 00:18:15.911 "uuid": "2ee254a8-8396-56a5-9883-06a868883c9a", 00:18:15.911 "is_configured": true, 00:18:15.911 "data_offset": 0, 00:18:15.911 "data_size": 65536 00:18:15.911 } 00:18:15.911 ] 00:18:15.911 }' 00:18:15.911 13:38:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:15.911 13:38:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:15.912 13:38:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:15.912 13:38:15 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:15.912 13:38:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:15.912 13:38:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.912 13:38:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:15.912 [2024-11-20 13:38:15.365809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:16.170 13:38:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.170 [2024-11-20 13:38:15.400977] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:16.170 13:38:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:16.170 [2024-11-20 13:38:15.403274] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:16.170 [2024-11-20 13:38:15.523038] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:16.170 [2024-11-20 13:38:15.523605] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:16.428 [2024-11-20 13:38:15.726352] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:16.428 [2024-11-20 13:38:15.726655] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:16.993 [2024-11-20 13:38:16.177320] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:16.993 [2024-11-20 13:38:16.177605] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:16.993 141.67 IOPS, 425.00 MiB/s [2024-11-20T13:38:16.478Z] 13:38:16 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:16.993 13:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:16.993 13:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:16.993 13:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:16.993 13:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:16.993 13:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.993 13:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.993 13:38:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.993 13:38:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:16.993 13:38:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.993 13:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:16.993 "name": "raid_bdev1", 00:18:16.993 "uuid": "15176db3-186e-4d17-97dd-e35e44f5facf", 00:18:16.993 "strip_size_kb": 0, 00:18:16.993 "state": "online", 00:18:16.993 "raid_level": "raid1", 00:18:16.993 "superblock": false, 00:18:16.993 "num_base_bdevs": 2, 00:18:16.993 "num_base_bdevs_discovered": 2, 00:18:16.993 "num_base_bdevs_operational": 2, 00:18:16.993 "process": { 00:18:16.993 "type": "rebuild", 00:18:16.993 "target": "spare", 00:18:16.993 "progress": { 00:18:16.993 "blocks": 12288, 00:18:16.993 "percent": 18 00:18:16.993 } 00:18:16.993 }, 00:18:16.993 "base_bdevs_list": [ 00:18:16.993 { 00:18:16.993 "name": "spare", 00:18:16.993 "uuid": "ca7cd451-9ea2-586b-9fb7-96101871bc6e", 00:18:16.993 "is_configured": true, 00:18:16.993 "data_offset": 0, 00:18:16.994 "data_size": 65536 
00:18:16.994 }, 00:18:16.994 { 00:18:16.994 "name": "BaseBdev2", 00:18:16.994 "uuid": "2ee254a8-8396-56a5-9883-06a868883c9a", 00:18:16.994 "is_configured": true, 00:18:16.994 "data_offset": 0, 00:18:16.994 "data_size": 65536 00:18:16.994 } 00:18:16.994 ] 00:18:16.994 }' 00:18:16.994 13:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:17.252 13:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:17.252 13:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:17.252 13:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:17.252 13:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:18:17.252 13:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:17.252 13:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:17.252 13:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:17.252 13:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=404 00:18:17.252 13:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:17.252 13:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:17.252 13:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:17.252 13:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:17.252 13:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:17.252 13:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:17.252 13:38:16 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.252 13:38:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.252 13:38:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:17.252 13:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.252 13:38:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.252 13:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:17.252 "name": "raid_bdev1", 00:18:17.252 "uuid": "15176db3-186e-4d17-97dd-e35e44f5facf", 00:18:17.252 "strip_size_kb": 0, 00:18:17.252 "state": "online", 00:18:17.252 "raid_level": "raid1", 00:18:17.252 "superblock": false, 00:18:17.252 "num_base_bdevs": 2, 00:18:17.252 "num_base_bdevs_discovered": 2, 00:18:17.252 "num_base_bdevs_operational": 2, 00:18:17.252 "process": { 00:18:17.252 "type": "rebuild", 00:18:17.252 "target": "spare", 00:18:17.252 "progress": { 00:18:17.252 "blocks": 14336, 00:18:17.252 "percent": 21 00:18:17.252 } 00:18:17.252 }, 00:18:17.252 "base_bdevs_list": [ 00:18:17.252 { 00:18:17.252 "name": "spare", 00:18:17.252 "uuid": "ca7cd451-9ea2-586b-9fb7-96101871bc6e", 00:18:17.252 "is_configured": true, 00:18:17.252 "data_offset": 0, 00:18:17.252 "data_size": 65536 00:18:17.252 }, 00:18:17.252 { 00:18:17.252 "name": "BaseBdev2", 00:18:17.252 "uuid": "2ee254a8-8396-56a5-9883-06a868883c9a", 00:18:17.252 "is_configured": true, 00:18:17.252 "data_offset": 0, 00:18:17.252 "data_size": 65536 00:18:17.252 } 00:18:17.252 ] 00:18:17.252 }' 00:18:17.252 13:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:17.252 [2024-11-20 13:38:16.610850] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:18:17.252 13:38:16 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:17.252 13:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:17.252 13:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:17.252 13:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:17.510 [2024-11-20 13:38:16.819666] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:18:17.510 [2024-11-20 13:38:16.920958] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:18:17.768 124.75 IOPS, 374.25 MiB/s [2024-11-20T13:38:17.253Z] [2024-11-20 13:38:17.237355] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:18:18.334 [2024-11-20 13:38:17.569623] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:18:18.334 13:38:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:18.334 13:38:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:18.334 13:38:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:18.334 13:38:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:18.334 13:38:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:18.334 13:38:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:18.335 13:38:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.335 13:38:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.335 13:38:17 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.335 13:38:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:18.335 [2024-11-20 13:38:17.691332] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:18:18.335 [2024-11-20 13:38:17.691610] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:18:18.335 13:38:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.335 13:38:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:18.335 "name": "raid_bdev1", 00:18:18.335 "uuid": "15176db3-186e-4d17-97dd-e35e44f5facf", 00:18:18.335 "strip_size_kb": 0, 00:18:18.335 "state": "online", 00:18:18.335 "raid_level": "raid1", 00:18:18.335 "superblock": false, 00:18:18.335 "num_base_bdevs": 2, 00:18:18.335 "num_base_bdevs_discovered": 2, 00:18:18.335 "num_base_bdevs_operational": 2, 00:18:18.335 "process": { 00:18:18.335 "type": "rebuild", 00:18:18.335 "target": "spare", 00:18:18.335 "progress": { 00:18:18.335 "blocks": 34816, 00:18:18.335 "percent": 53 00:18:18.335 } 00:18:18.335 }, 00:18:18.335 "base_bdevs_list": [ 00:18:18.335 { 00:18:18.335 "name": "spare", 00:18:18.335 "uuid": "ca7cd451-9ea2-586b-9fb7-96101871bc6e", 00:18:18.335 "is_configured": true, 00:18:18.335 "data_offset": 0, 00:18:18.335 "data_size": 65536 00:18:18.335 }, 00:18:18.335 { 00:18:18.335 "name": "BaseBdev2", 00:18:18.335 "uuid": "2ee254a8-8396-56a5-9883-06a868883c9a", 00:18:18.335 "is_configured": true, 00:18:18.335 "data_offset": 0, 00:18:18.335 "data_size": 65536 00:18:18.335 } 00:18:18.335 ] 00:18:18.335 }' 00:18:18.335 13:38:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:18.335 13:38:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 
-- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:18.335 13:38:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:18.606 13:38:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:18.607 13:38:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:19.480 107.40 IOPS, 322.20 MiB/s [2024-11-20T13:38:18.965Z] 13:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:19.480 13:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:19.480 13:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:19.480 13:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:19.480 13:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:19.480 13:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:19.480 13:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.480 13:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.480 13:38:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.480 13:38:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:19.480 13:38:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.480 13:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:19.480 "name": "raid_bdev1", 00:18:19.480 "uuid": "15176db3-186e-4d17-97dd-e35e44f5facf", 00:18:19.480 "strip_size_kb": 0, 00:18:19.480 "state": "online", 00:18:19.480 "raid_level": "raid1", 00:18:19.480 "superblock": false, 00:18:19.480 "num_base_bdevs": 2, 
00:18:19.480 "num_base_bdevs_discovered": 2, 00:18:19.480 "num_base_bdevs_operational": 2, 00:18:19.480 "process": { 00:18:19.480 "type": "rebuild", 00:18:19.480 "target": "spare", 00:18:19.480 "progress": { 00:18:19.480 "blocks": 53248, 00:18:19.480 "percent": 81 00:18:19.480 } 00:18:19.480 }, 00:18:19.480 "base_bdevs_list": [ 00:18:19.480 { 00:18:19.480 "name": "spare", 00:18:19.480 "uuid": "ca7cd451-9ea2-586b-9fb7-96101871bc6e", 00:18:19.480 "is_configured": true, 00:18:19.480 "data_offset": 0, 00:18:19.480 "data_size": 65536 00:18:19.480 }, 00:18:19.480 { 00:18:19.480 "name": "BaseBdev2", 00:18:19.480 "uuid": "2ee254a8-8396-56a5-9883-06a868883c9a", 00:18:19.480 "is_configured": true, 00:18:19.480 "data_offset": 0, 00:18:19.480 "data_size": 65536 00:18:19.480 } 00:18:19.480 ] 00:18:19.480 }' 00:18:19.480 13:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:19.480 13:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:19.480 13:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:19.738 13:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:19.738 13:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:19.738 [2024-11-20 13:38:19.007765] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:18:19.738 94.67 IOPS, 284.00 MiB/s [2024-11-20T13:38:19.223Z] [2024-11-20 13:38:19.210381] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:18:20.329 [2024-11-20 13:38:19.642907] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:20.329 [2024-11-20 13:38:19.748891] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev 
raid_bdev1 00:18:20.329 [2024-11-20 13:38:19.751787] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.588 13:38:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:20.588 13:38:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:20.588 13:38:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.588 13:38:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:20.588 13:38:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:20.588 13:38:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.588 13:38:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.588 13:38:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.588 13:38:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:20.588 13:38:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.588 13:38:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.588 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.588 "name": "raid_bdev1", 00:18:20.588 "uuid": "15176db3-186e-4d17-97dd-e35e44f5facf", 00:18:20.588 "strip_size_kb": 0, 00:18:20.588 "state": "online", 00:18:20.588 "raid_level": "raid1", 00:18:20.588 "superblock": false, 00:18:20.588 "num_base_bdevs": 2, 00:18:20.588 "num_base_bdevs_discovered": 2, 00:18:20.588 "num_base_bdevs_operational": 2, 00:18:20.588 "base_bdevs_list": [ 00:18:20.588 { 00:18:20.588 "name": "spare", 00:18:20.588 "uuid": "ca7cd451-9ea2-586b-9fb7-96101871bc6e", 00:18:20.588 "is_configured": true, 00:18:20.588 
"data_offset": 0, 00:18:20.588 "data_size": 65536 00:18:20.588 }, 00:18:20.588 { 00:18:20.588 "name": "BaseBdev2", 00:18:20.588 "uuid": "2ee254a8-8396-56a5-9883-06a868883c9a", 00:18:20.588 "is_configured": true, 00:18:20.588 "data_offset": 0, 00:18:20.588 "data_size": 65536 00:18:20.588 } 00:18:20.588 ] 00:18:20.588 }' 00:18:20.588 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.588 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:20.588 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.848 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:20.848 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:18:20.848 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:20.848 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.848 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:20.848 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:20.848 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.848 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.848 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.848 13:38:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.848 13:38:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:20.848 13:38:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.848 13:38:20 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.848 "name": "raid_bdev1", 00:18:20.848 "uuid": "15176db3-186e-4d17-97dd-e35e44f5facf", 00:18:20.848 "strip_size_kb": 0, 00:18:20.848 "state": "online", 00:18:20.848 "raid_level": "raid1", 00:18:20.848 "superblock": false, 00:18:20.848 "num_base_bdevs": 2, 00:18:20.848 "num_base_bdevs_discovered": 2, 00:18:20.848 "num_base_bdevs_operational": 2, 00:18:20.848 "base_bdevs_list": [ 00:18:20.848 { 00:18:20.848 "name": "spare", 00:18:20.848 "uuid": "ca7cd451-9ea2-586b-9fb7-96101871bc6e", 00:18:20.848 "is_configured": true, 00:18:20.848 "data_offset": 0, 00:18:20.848 "data_size": 65536 00:18:20.848 }, 00:18:20.848 { 00:18:20.848 "name": "BaseBdev2", 00:18:20.848 "uuid": "2ee254a8-8396-56a5-9883-06a868883c9a", 00:18:20.848 "is_configured": true, 00:18:20.848 "data_offset": 0, 00:18:20.848 "data_size": 65536 00:18:20.848 } 00:18:20.848 ] 00:18:20.848 }' 00:18:20.848 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.848 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:20.848 85.14 IOPS, 255.43 MiB/s [2024-11-20T13:38:20.333Z] 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.848 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:20.848 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:20.848 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:20.848 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:20.848 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:20.848 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:18:20.848 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:20.848 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.848 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.848 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.848 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.848 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.848 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.848 13:38:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.848 13:38:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:20.848 13:38:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.848 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.848 "name": "raid_bdev1", 00:18:20.848 "uuid": "15176db3-186e-4d17-97dd-e35e44f5facf", 00:18:20.848 "strip_size_kb": 0, 00:18:20.848 "state": "online", 00:18:20.848 "raid_level": "raid1", 00:18:20.848 "superblock": false, 00:18:20.848 "num_base_bdevs": 2, 00:18:20.848 "num_base_bdevs_discovered": 2, 00:18:20.848 "num_base_bdevs_operational": 2, 00:18:20.848 "base_bdevs_list": [ 00:18:20.848 { 00:18:20.848 "name": "spare", 00:18:20.848 "uuid": "ca7cd451-9ea2-586b-9fb7-96101871bc6e", 00:18:20.848 "is_configured": true, 00:18:20.848 "data_offset": 0, 00:18:20.848 "data_size": 65536 00:18:20.848 }, 00:18:20.848 { 00:18:20.848 "name": "BaseBdev2", 00:18:20.848 "uuid": "2ee254a8-8396-56a5-9883-06a868883c9a", 00:18:20.848 "is_configured": true, 00:18:20.848 "data_offset": 0, 00:18:20.848 
"data_size": 65536 00:18:20.848 } 00:18:20.848 ] 00:18:20.848 }' 00:18:20.848 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.848 13:38:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:21.417 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:21.417 13:38:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.417 13:38:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:21.417 [2024-11-20 13:38:20.630752] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:21.417 [2024-11-20 13:38:20.630794] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:21.417 00:18:21.417 Latency(us) 00:18:21.417 [2024-11-20T13:38:20.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.417 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:18:21.417 raid_bdev1 : 7.53 81.09 243.28 0.00 0.00 16198.31 309.26 109489.86 00:18:21.417 [2024-11-20T13:38:20.902Z] =================================================================================================================== 00:18:21.417 [2024-11-20T13:38:20.902Z] Total : 81.09 243.28 0.00 0.00 16198.31 309.26 109489.86 00:18:21.417 [2024-11-20 13:38:20.754806] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:21.417 [2024-11-20 13:38:20.754900] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:21.417 [2024-11-20 13:38:20.754983] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:21.417 [2024-11-20 13:38:20.754999] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:21.417 { 00:18:21.417 "results": [ 
00:18:21.417 { 00:18:21.417 "job": "raid_bdev1", 00:18:21.417 "core_mask": "0x1", 00:18:21.417 "workload": "randrw", 00:18:21.417 "percentage": 50, 00:18:21.417 "status": "finished", 00:18:21.417 "queue_depth": 2, 00:18:21.417 "io_size": 3145728, 00:18:21.417 "runtime": 7.534618, 00:18:21.417 "iops": 81.09236593016395, 00:18:21.417 "mibps": 243.27709779049184, 00:18:21.417 "io_failed": 0, 00:18:21.417 "io_timeout": 0, 00:18:21.417 "avg_latency_us": 16198.312459001309, 00:18:21.417 "min_latency_us": 309.2562248995984, 00:18:21.417 "max_latency_us": 109489.86345381525 00:18:21.417 } 00:18:21.417 ], 00:18:21.417 "core_count": 1 00:18:21.417 } 00:18:21.417 13:38:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.417 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.417 13:38:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.417 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:18:21.417 13:38:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:21.417 13:38:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.417 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:21.417 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:21.417 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:18:21.417 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:18:21.417 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:21.417 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:18:21.417 13:38:20 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:18:21.417 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:21.417 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:21.417 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:18:21.417 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:21.417 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:21.417 13:38:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:18:21.677 /dev/nbd0 00:18:21.677 13:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:21.677 13:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:21.677 13:38:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:21.677 13:38:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:18:21.677 13:38:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:21.677 13:38:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:21.677 13:38:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:21.677 13:38:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:18:21.677 13:38:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:21.677 13:38:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:21.677 13:38:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:21.677 1+0 records in 
00:18:21.677 1+0 records out 00:18:21.677 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469127 s, 8.7 MB/s 00:18:21.677 13:38:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:21.677 13:38:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:18:21.677 13:38:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:21.677 13:38:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:21.677 13:38:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:18:21.677 13:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:21.677 13:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:21.677 13:38:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:21.677 13:38:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:18:21.677 13:38:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:18:21.677 13:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:21.677 13:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:18:21.677 13:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:21.677 13:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:21.677 13:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:21.677 13:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:18:21.677 13:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:21.677 13:38:21 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:21.677 13:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:18:21.936 /dev/nbd1 00:18:21.936 13:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:21.936 13:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:21.936 13:38:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:21.936 13:38:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:18:21.936 13:38:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:21.936 13:38:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:21.936 13:38:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:21.936 13:38:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:18:21.936 13:38:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:21.936 13:38:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:21.936 13:38:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:21.936 1+0 records in 00:18:21.936 1+0 records out 00:18:21.936 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000535299 s, 7.7 MB/s 00:18:21.936 13:38:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:21.936 13:38:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:18:21.936 13:38:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:21.936 13:38:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:21.936 13:38:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:18:21.936 13:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:21.936 13:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:21.936 13:38:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:22.195 13:38:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:18:22.195 13:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:22.195 13:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:18:22.195 13:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:22.195 13:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:18:22.195 13:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:22.195 13:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:22.455 13:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:22.455 13:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:22.455 13:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:22.455 13:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:22.455 13:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:22.455 13:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 
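The verify_raid_bdev_process checks that repeat through this log fetch the raid bdev via `rpc_cmd bdev_raid_get_bdevs all` and pull the rebuild state out with `jq -r '.process.type // "none"'` and `.process.target // "none"`. As a standalone sketch of that extraction — using a trimmed sample JSON in the shape shown above, not live RPC output:

```shell
# Sketch of the jq extraction used by verify_raid_bdev_process in this log.
# The JSON is a trimmed sample shaped like the bdev_raid_get_bdevs output
# captured above, not live rpc output.
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "online",
  "process": { "type": "rebuild", "target": "spare",
               "progress": { "blocks": 53248, "percent": 81 } }
}'
# "// \"none\"" supplies a fallback once the rebuild finishes and .process
# disappears from the RPC output, which is what breaks the wait loop.
process_type=$(printf '%s' "$raid_bdev_info" | jq -r '.process.type // "none"')
process_target=$(printf '%s' "$raid_bdev_info" | jq -r '.process.target // "none"')
echo "$process_type/$process_target"
```

With the sample above this prints `rebuild/spare`; after `raid_bdev_process_finish_done` fires, the same filters yield `none/none` and the test's `break` at bdev_raid.sh@709 is taken.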
00:18:22.455 13:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:18:22.455 13:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:22.455 13:38:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:22.455 13:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:22.455 13:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:22.455 13:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:22.455 13:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:18:22.455 13:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:22.455 13:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:22.714 13:38:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:22.714 13:38:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:22.714 13:38:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:22.714 13:38:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:22.714 13:38:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:22.714 13:38:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:22.714 13:38:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:18:22.714 13:38:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:22.714 13:38:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:18:22.714 13:38:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # 
killprocess 76232 00:18:22.714 13:38:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76232 ']' 00:18:22.714 13:38:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76232 00:18:22.714 13:38:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:18:22.714 13:38:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:22.714 13:38:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76232 00:18:22.714 13:38:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:22.714 13:38:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:22.714 killing process with pid 76232 00:18:22.714 13:38:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76232' 00:18:22.714 13:38:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76232 00:18:22.714 Received shutdown signal, test time was about 8.967080 seconds 00:18:22.714 00:18:22.714 Latency(us) 00:18:22.714 [2024-11-20T13:38:22.199Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.714 [2024-11-20T13:38:22.199Z] =================================================================================================================== 00:18:22.714 [2024-11-20T13:38:22.199Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:22.714 [2024-11-20 13:38:22.160664] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:22.714 13:38:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76232 00:18:22.973 [2024-11-20 13:38:22.416026] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:24.351 13:38:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:18:24.351 00:18:24.351 real 0m12.179s 00:18:24.351 user 
0m15.202s 00:18:24.351 sys 0m1.699s 00:18:24.351 13:38:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:24.351 13:38:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:24.351 ************************************ 00:18:24.351 END TEST raid_rebuild_test_io 00:18:24.351 ************************************ 00:18:24.351 13:38:23 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:18:24.351 13:38:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:24.351 13:38:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:24.351 13:38:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:24.351 ************************************ 00:18:24.351 START TEST raid_rebuild_test_sb_io 00:18:24.351 ************************************ 00:18:24.351 13:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:18:24.351 13:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:24.351 13:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:24.351 13:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:24.351 13:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:18:24.351 13:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:24.351 13:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:24.351 13:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:24.351 13:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:24.351 13:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:18:24.351 13:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:24.351 13:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:24.351 13:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:24.351 13:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:24.351 13:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:24.351 13:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:24.351 13:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:24.351 13:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:24.351 13:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:24.351 13:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:24.351 13:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:24.351 13:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:24.351 13:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:24.351 13:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:24.352 13:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:24.352 13:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76623 00:18:24.352 13:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76623 00:18:24.352 13:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U 
-z -L bdev_raid 00:18:24.352 13:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 76623 ']' 00:18:24.352 13:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.352 13:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:24.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:24.352 13:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.352 13:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:24.352 13:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:24.352 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:24.352 Zero copy mechanism will not be used. 00:18:24.352 [2024-11-20 13:38:23.818781] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
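The bdevperf results JSON earlier in this log reports `iops: 81.09` alongside `mibps: 243.28` for an `io_size` of 3145728 (the `-o 3M` option visible in the bdevperf command line). The two figures are internally consistent; a quick awk cross-check using the values copied from the log:

```shell
# Sanity-check the bdevperf figures reported in this log:
# MiB/s should equal IOPS * io_size / 1 MiB.
iops=81.09236593016395   # "iops" field from the results JSON above
io_size=3145728          # 3 MiB, matching bdevperf's -o 3M
awk -v i="$iops" -v s="$io_size" \
    'BEGIN { printf "%.2f MiB/s\n", i * s / (1024 * 1024) }'
```

This prints `243.28 MiB/s`, matching the `mibps` field (243.277…) rounded to two decimals.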
00:18:24.352 [2024-11-20 13:38:23.818918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76623 ]
00:18:24.609 [2024-11-20 13:38:23.999415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:24.868 [2024-11-20 13:38:24.118827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:24.868 [2024-11-20 13:38:24.326547] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:18:24.868 [2024-11-20 13:38:24.326596] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:18:25.436 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:25.436 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0
00:18:25.436 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:18:25.436 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:18:25.436 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:25.436 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:25.436 BaseBdev1_malloc
00:18:25.436 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:25.436 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:18:25.436 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:25.436 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:25.436 [2024-11-20 13:38:24.706373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:18:25.436 [2024-11-20 13:38:24.706438] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:25.436 [2024-11-20 13:38:24.706463] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:18:25.436 [2024-11-20 13:38:24.706477] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:25.436 [2024-11-20 13:38:24.709036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:25.436 [2024-11-20 13:38:24.709092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:18:25.436 BaseBdev1
00:18:25.436 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:25.436 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:18:25.436 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:18:25.436 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:25.436 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:25.436 BaseBdev2_malloc
00:18:25.436 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:25.436 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:18:25.436 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:25.436 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:25.436 [2024-11-20 13:38:24.764959] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:18:25.436 [2024-11-20 13:38:24.765026] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:25.436 [2024-11-20 13:38:24.765051] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:18:25.436 [2024-11-20 13:38:24.765075] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:25.436 [2024-11-20 13:38:24.767609] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:25.436 [2024-11-20 13:38:24.767649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:18:25.436 BaseBdev2
00:18:25.437 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:25.437 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:18:25.437 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:25.437 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:25.437 spare_malloc
00:18:25.437 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:25.437 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:18:25.437 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:25.437 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:25.437 spare_delay
00:18:25.437 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:25.437 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:18:25.437 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:25.437 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:25.437 [2024-11-20 13:38:24.838275] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:18:25.437 [2024-11-20 13:38:24.838339] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:25.437 [2024-11-20 13:38:24.838361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:18:25.437 [2024-11-20 13:38:24.838376] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:25.437 [2024-11-20 13:38:24.840860] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:25.437 [2024-11-20 13:38:24.840905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:18:25.437 spare
00:18:25.437 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:25.437 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1
00:18:25.437 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:25.437 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:25.437 [2024-11-20 13:38:24.846326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:18:25.437 [2024-11-20 13:38:24.848529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:18:25.437 [2024-11-20 13:38:24.848710] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:18:25.437 [2024-11-20 13:38:24.848734] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:18:25.437 [2024-11-20 13:38:24.849006] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:18:25.437 [2024-11-20 13:38:24.849190] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:18:25.437 [2024-11-20 13:38:24.849201] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:18:25.437 [2024-11-20 13:38:24.849355] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:25.437 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:25.437 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:18:25.437 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:25.437 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:25.437 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:25.437 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:25.437 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:18:25.437 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:25.437 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:25.437 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:25.437 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:25.437 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:25.437 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:25.437 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:25.437 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:25.437 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:25.437 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:25.437 "name": "raid_bdev1",
00:18:25.437 "uuid": "d481f70d-6d38-4ec0-b17a-a2a6d97b8c2d",
00:18:25.437 "strip_size_kb": 0,
00:18:25.437 "state": "online",
00:18:25.437 "raid_level": "raid1",
00:18:25.437 "superblock": true,
00:18:25.437 "num_base_bdevs": 2,
00:18:25.437 "num_base_bdevs_discovered": 2,
00:18:25.437 "num_base_bdevs_operational": 2,
00:18:25.437 "base_bdevs_list": [
00:18:25.437 {
00:18:25.437 "name": "BaseBdev1",
00:18:25.437 "uuid": "ca3fee15-64aa-5824-a3d7-fbcadf24867f",
00:18:25.437 "is_configured": true,
00:18:25.437 "data_offset": 2048,
00:18:25.437 "data_size": 63488
00:18:25.437 },
00:18:25.437 {
00:18:25.437 "name": "BaseBdev2",
00:18:25.437 "uuid": "e58cedf7-4085-5fce-bf89-cf523a9e374c",
00:18:25.437 "is_configured": true,
00:18:25.437 "data_offset": 2048,
00:18:25.437 "data_size": 63488
00:18:25.437 }
00:18:25.437 ]
00:18:25.437 }'
00:18:25.437 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:25.437 13:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:26.004 13:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:18:26.004 13:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:26.004 13:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:18:26.004 13:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:26.004 [2024-11-20 13:38:25.250230] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:26.004 13:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:26.004 13:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488
00:18:26.004 13:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:18:26.004 13:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:26.004 13:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:26.004 13:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:26.004 13:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:26.004 13:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048
00:18:26.004 13:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']'
00:18:26.004 13:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:18:26.004 13:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:18:26.004 13:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:26.004 13:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:26.004 [2024-11-20 13:38:25.345656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:18:26.004 13:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:26.004 13:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:18:26.004 13:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:26.004 13:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:26.004 13:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:26.004 13:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:26.004 13:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:18:26.004 13:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:26.004 13:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:26.004 13:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:26.004 13:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:26.004 13:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:26.004 13:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:26.004 13:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:26.004 13:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:26.004 13:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:26.004 13:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:26.004 "name": "raid_bdev1",
00:18:26.004 "uuid": "d481f70d-6d38-4ec0-b17a-a2a6d97b8c2d",
00:18:26.004 "strip_size_kb": 0,
00:18:26.004 "state": "online",
00:18:26.004 "raid_level": "raid1",
00:18:26.004 "superblock": true,
00:18:26.004 "num_base_bdevs": 2,
00:18:26.004 "num_base_bdevs_discovered": 1,
00:18:26.004 "num_base_bdevs_operational": 1,
00:18:26.004 "base_bdevs_list": [
00:18:26.004 {
00:18:26.004 "name": null,
00:18:26.004 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:26.004 "is_configured": false,
00:18:26.004 "data_offset": 0,
00:18:26.004 "data_size": 63488
00:18:26.004 },
00:18:26.004 {
00:18:26.004 "name": "BaseBdev2",
00:18:26.004 "uuid": "e58cedf7-4085-5fce-bf89-cf523a9e374c",
00:18:26.004 "is_configured": true,
00:18:26.004 "data_offset": 2048,
00:18:26.004 "data_size": 63488
00:18:26.004 }
00:18:26.004 ]
00:18:26.004 }'
00:18:26.004 13:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:26.004 13:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:26.004 [2024-11-20 13:38:25.442284] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:18:26.004 I/O size of 3145728 is greater than zero copy threshold (65536).
00:18:26.004 Zero copy mechanism will not be used.
00:18:26.004 Running I/O for 60 seconds...
00:18:26.581 13:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:18:26.581 13:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:26.581 13:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:26.581 [2024-11-20 13:38:25.802482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:18:26.581 13:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:26.581 13:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1
00:18:26.581 [2024-11-20 13:38:25.860341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150
00:18:26.581 [2024-11-20 13:38:25.862727] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:18:26.581 [2024-11-20 13:38:25.971394] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:18:26.581 [2024-11-20 13:38:25.972135] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:18:26.911 [2024-11-20 13:38:26.188068] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:18:26.911 [2024-11-20 13:38:26.188381] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:18:27.188 [2024-11-20 13:38:26.427687] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:18:27.188 [2024-11-20 13:38:26.428212] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:18:27.188 193.00 IOPS, 579.00 MiB/s [2024-11-20T13:38:26.673Z] [2024-11-20 13:38:26.644165] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:18:27.188 [2024-11-20 13:38:26.644479] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:18:27.447 13:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:27.447 13:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:27.447 13:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:18:27.447 13:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:18:27.447 13:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:27.447 13:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:27.447 13:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:27.447 13:38:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:27.447 13:38:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:27.447 13:38:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:27.447 13:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:27.447 "name": "raid_bdev1",
00:18:27.447 "uuid": "d481f70d-6d38-4ec0-b17a-a2a6d97b8c2d",
00:18:27.447 "strip_size_kb": 0,
00:18:27.447 "state": "online",
00:18:27.447 "raid_level": "raid1",
00:18:27.447 "superblock": true,
00:18:27.447 "num_base_bdevs": 2,
00:18:27.447 "num_base_bdevs_discovered": 2,
00:18:27.448 "num_base_bdevs_operational": 2,
00:18:27.448 "process": {
00:18:27.448 "type": "rebuild",
00:18:27.448 "target": "spare",
00:18:27.448 "progress": {
00:18:27.448 "blocks": 12288,
00:18:27.448 "percent": 19
00:18:27.448 }
00:18:27.448 },
00:18:27.448 "base_bdevs_list": [
00:18:27.448 {
00:18:27.448 "name": "spare",
00:18:27.448 "uuid": "559fa14a-88ff-58ba-8e15-e701564a0330",
00:18:27.448 "is_configured": true,
00:18:27.448 "data_offset": 2048,
00:18:27.448 "data_size": 63488
00:18:27.448 },
00:18:27.448 {
00:18:27.448 "name": "BaseBdev2",
00:18:27.448 "uuid": "e58cedf7-4085-5fce-bf89-cf523a9e374c",
00:18:27.448 "is_configured": true,
00:18:27.448 "data_offset": 2048,
00:18:27.448 "data_size": 63488
00:18:27.448 }
00:18:27.448 ]
00:18:27.448 }'
00:18:27.707 13:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:27.707 13:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:18:27.707 13:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:27.707 13:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:18:27.707 13:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:18:27.707 13:38:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:27.707 13:38:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:27.707 [2024-11-20 13:38:27.005850] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:18:27.707 [2024-11-20 13:38:27.106047] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:18:27.707 [2024-11-20 13:38:27.120725] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:27.707 [2024-11-20 13:38:27.120963] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:18:27.707 [2024-11-20 13:38:27.120994] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:18:27.707 [2024-11-20 13:38:27.174396] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080
00:18:27.707 13:38:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:27.707 13:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:18:27.707 13:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:27.707 13:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:27.707 13:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:27.707 13:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:27.707 13:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:18:27.707 13:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:27.707 13:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:27.707 13:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:27.707 13:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:27.967 13:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:27.967 13:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:27.967 13:38:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:27.967 13:38:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:27.967 13:38:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:27.967 13:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:27.967 "name": "raid_bdev1",
00:18:27.967 "uuid": "d481f70d-6d38-4ec0-b17a-a2a6d97b8c2d",
00:18:27.967 "strip_size_kb": 0,
00:18:27.967 "state": "online",
00:18:27.967 "raid_level": "raid1",
00:18:27.967 "superblock": true,
00:18:27.967 "num_base_bdevs": 2,
00:18:27.967 "num_base_bdevs_discovered": 1,
00:18:27.967 "num_base_bdevs_operational": 1,
00:18:27.967 "base_bdevs_list": [
00:18:27.967 {
00:18:27.967 "name": null,
00:18:27.967 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:27.967 "is_configured": false,
00:18:27.967 "data_offset": 0,
00:18:27.967 "data_size": 63488
00:18:27.967 },
00:18:27.967 {
00:18:27.967 "name": "BaseBdev2",
00:18:27.967 "uuid": "e58cedf7-4085-5fce-bf89-cf523a9e374c",
00:18:27.967 "is_configured": true,
00:18:27.967 "data_offset": 2048,
00:18:27.967 "data_size": 63488
00:18:27.967 }
00:18:27.967 ]
00:18:27.967 }'
00:18:27.967 13:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:27.967 13:38:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:28.225 159.50 IOPS, 478.50 MiB/s [2024-11-20T13:38:27.710Z] 13:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:18:28.225 13:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:28.225 13:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:18:28.225 13:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:18:28.225 13:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:28.225 13:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:28.225 13:38:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:28.225 13:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:28.225 13:38:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:28.225 13:38:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:28.225 13:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:28.225 "name": "raid_bdev1",
00:18:28.225 "uuid": "d481f70d-6d38-4ec0-b17a-a2a6d97b8c2d",
00:18:28.225 "strip_size_kb": 0,
00:18:28.225 "state": "online",
00:18:28.225 "raid_level": "raid1",
00:18:28.225 "superblock": true,
00:18:28.225 "num_base_bdevs": 2,
00:18:28.225 "num_base_bdevs_discovered": 1,
00:18:28.225 "num_base_bdevs_operational": 1,
00:18:28.225 "base_bdevs_list": [
00:18:28.225 {
00:18:28.225 "name": null,
00:18:28.225 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:28.225 "is_configured": false,
00:18:28.225 "data_offset": 0,
00:18:28.225 "data_size": 63488
00:18:28.225 },
00:18:28.225 {
00:18:28.225 "name": "BaseBdev2",
00:18:28.225 "uuid": "e58cedf7-4085-5fce-bf89-cf523a9e374c",
00:18:28.226 "is_configured": true,
00:18:28.226 "data_offset": 2048,
00:18:28.226 "data_size": 63488
00:18:28.226 }
00:18:28.226 ]
00:18:28.226 }'
00:18:28.484 13:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:28.484 13:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:18:28.484 13:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:28.484 13:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:18:28.484 13:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:18:28.484 13:38:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:28.484 13:38:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:28.484 [2024-11-20 13:38:27.790044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:18:28.484 13:38:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:28.484 13:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1
00:18:28.484 [2024-11-20 13:38:27.844396] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:18:28.484 [2024-11-20 13:38:27.846737] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:18:28.484 [2024-11-20 13:38:27.961326] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:18:28.484 [2024-11-20 13:38:27.961993] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:18:28.743 [2024-11-20 13:38:28.177932] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:18:28.743 [2024-11-20 13:38:28.178306] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:18:29.260 162.00 IOPS, 486.00 MiB/s [2024-11-20T13:38:28.745Z] [2024-11-20 13:38:28.530774] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:18:29.260 [2024-11-20 13:38:28.663906] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:18:29.520 13:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:29.520 13:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:29.520 13:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:18:29.520 13:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:18:29.520 13:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:29.520 13:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:29.520 13:38:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:29.520 13:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:29.520 13:38:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:29.520 13:38:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:29.520 13:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:29.520 "name": "raid_bdev1",
00:18:29.520 "uuid": "d481f70d-6d38-4ec0-b17a-a2a6d97b8c2d",
00:18:29.520 "strip_size_kb": 0,
00:18:29.520 "state": "online",
00:18:29.520 "raid_level": "raid1",
00:18:29.520 "superblock": true,
00:18:29.520 "num_base_bdevs": 2,
00:18:29.520 "num_base_bdevs_discovered": 2,
00:18:29.520 "num_base_bdevs_operational": 2,
00:18:29.520 "process": {
00:18:29.520 "type": "rebuild",
00:18:29.520 "target": "spare",
00:18:29.520 "progress": {
00:18:29.520 "blocks": 12288,
00:18:29.520 "percent": 19
00:18:29.520 }
00:18:29.520 },
00:18:29.520 "base_bdevs_list": [
00:18:29.520 {
00:18:29.520 "name": "spare",
00:18:29.520 "uuid": "559fa14a-88ff-58ba-8e15-e701564a0330",
00:18:29.520 "is_configured": true,
00:18:29.520 "data_offset": 2048,
00:18:29.520 "data_size": 63488
00:18:29.520 },
00:18:29.520 {
00:18:29.520 "name": "BaseBdev2",
00:18:29.520 "uuid": "e58cedf7-4085-5fce-bf89-cf523a9e374c",
00:18:29.520 "is_configured": true,
00:18:29.520 "data_offset": 2048,
00:18:29.520 "data_size": 63488
00:18:29.520 }
00:18:29.520 ]
00:18:29.520 }'
00:18:29.520 13:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:29.520 13:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:18:29.520 13:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:29.520 13:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:18:29.520 13:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:18:29.520 13:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']'
00:18:29.520 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
00:18:29.520 13:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2
00:18:29.520 13:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:18:29.520 13:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']'
00:18:29.520 13:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=416
00:18:29.520 13:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:18:29.520 13:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:29.520 13:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:29.520 13:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:18:29.520 13:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:18:29.520 13:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:29.520 13:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:29.520 13:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:29.520 13:38:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:29.520 13:38:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:29.780 13:38:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:29.780 [2024-11-20 13:38:29.018504] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:18:29.780 13:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:29.780 "name": "raid_bdev1",
00:18:29.780 "uuid": "d481f70d-6d38-4ec0-b17a-a2a6d97b8c2d",
00:18:29.780 "strip_size_kb": 0,
00:18:29.780 "state": "online",
00:18:29.780 "raid_level": "raid1",
00:18:29.780 "superblock": true,
00:18:29.780 "num_base_bdevs": 2,
00:18:29.780 "num_base_bdevs_discovered": 2,
00:18:29.780 "num_base_bdevs_operational": 2,
00:18:29.780 "process": {
00:18:29.780 "type": "rebuild",
00:18:29.780 "target": "spare",
00:18:29.780 "progress": {
00:18:29.780 "blocks": 14336,
00:18:29.780 "percent": 22
00:18:29.780 }
00:18:29.780 },
00:18:29.780 "base_bdevs_list": [
00:18:29.780 {
00:18:29.780 "name": "spare",
00:18:29.780 "uuid": "559fa14a-88ff-58ba-8e15-e701564a0330",
00:18:29.780 "is_configured": true,
00:18:29.780 "data_offset": 2048,
00:18:29.780 "data_size": 63488
00:18:29.780 },
00:18:29.780 {
00:18:29.780 "name": "BaseBdev2",
00:18:29.780 "uuid": "e58cedf7-4085-5fce-bf89-cf523a9e374c",
00:18:29.780 "is_configured": true,
00:18:29.780 "data_offset": 2048,
00:18:29.780 "data_size": 63488
00:18:29.780 }
00:18:29.780 ]
00:18:29.780 }'
00:18:29.780 13:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:29.780 13:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:18:29.780 13:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:29.780 13:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:18:29.780 13:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:18:30.038 [2024-11-20 13:38:29.344129] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576
00:18:30.038 [2024-11-20 13:38:29.344711] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576
00:18:30.038 148.00 IOPS, 444.00 MiB/s [2024-11-20T13:38:29.523Z] [2024-11-20 13:38:29.468629] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576
00:18:30.607 [2024-11-20 13:38:29.793559] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720
00:18:30.866 [2024-11-20 13:38:30.127093] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768
offset_begin: 30720 offset_end: 36864 00:18:30.866 13:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:30.866 13:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:30.866 13:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:30.866 13:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:30.866 13:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:30.866 13:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:30.866 13:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.866 13:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.866 13:38:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.866 13:38:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:30.866 13:38:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.866 13:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:30.866 "name": "raid_bdev1", 00:18:30.866 "uuid": "d481f70d-6d38-4ec0-b17a-a2a6d97b8c2d", 00:18:30.866 "strip_size_kb": 0, 00:18:30.866 "state": "online", 00:18:30.866 "raid_level": "raid1", 00:18:30.866 "superblock": true, 00:18:30.866 "num_base_bdevs": 2, 00:18:30.866 "num_base_bdevs_discovered": 2, 00:18:30.866 "num_base_bdevs_operational": 2, 00:18:30.866 "process": { 00:18:30.866 "type": "rebuild", 00:18:30.866 "target": "spare", 00:18:30.866 "progress": { 00:18:30.866 "blocks": 32768, 00:18:30.866 "percent": 51 00:18:30.866 } 00:18:30.866 }, 00:18:30.866 "base_bdevs_list": [ 
00:18:30.866 { 00:18:30.866 "name": "spare", 00:18:30.866 "uuid": "559fa14a-88ff-58ba-8e15-e701564a0330", 00:18:30.866 "is_configured": true, 00:18:30.866 "data_offset": 2048, 00:18:30.866 "data_size": 63488 00:18:30.866 }, 00:18:30.866 { 00:18:30.866 "name": "BaseBdev2", 00:18:30.866 "uuid": "e58cedf7-4085-5fce-bf89-cf523a9e374c", 00:18:30.866 "is_configured": true, 00:18:30.866 "data_offset": 2048, 00:18:30.866 "data_size": 63488 00:18:30.866 } 00:18:30.866 ] 00:18:30.866 }' 00:18:30.866 13:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:30.866 13:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:30.866 13:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:30.866 13:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:30.866 13:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:31.126 128.60 IOPS, 385.80 MiB/s [2024-11-20T13:38:30.611Z] [2024-11-20 13:38:30.582725] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:18:31.690 [2024-11-20 13:38:30.919690] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:18:31.948 13:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:31.948 13:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:31.948 13:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:31.948 13:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:31.948 13:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:18:31.948 13:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:31.948 13:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.948 13:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.948 13:38:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.948 13:38:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.948 13:38:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.949 13:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:31.949 "name": "raid_bdev1", 00:18:31.949 "uuid": "d481f70d-6d38-4ec0-b17a-a2a6d97b8c2d", 00:18:31.949 "strip_size_kb": 0, 00:18:31.949 "state": "online", 00:18:31.949 "raid_level": "raid1", 00:18:31.949 "superblock": true, 00:18:31.949 "num_base_bdevs": 2, 00:18:31.949 "num_base_bdevs_discovered": 2, 00:18:31.949 "num_base_bdevs_operational": 2, 00:18:31.949 "process": { 00:18:31.949 "type": "rebuild", 00:18:31.949 "target": "spare", 00:18:31.949 "progress": { 00:18:31.949 "blocks": 51200, 00:18:31.949 "percent": 80 00:18:31.949 } 00:18:31.949 }, 00:18:31.949 "base_bdevs_list": [ 00:18:31.949 { 00:18:31.949 "name": "spare", 00:18:31.949 "uuid": "559fa14a-88ff-58ba-8e15-e701564a0330", 00:18:31.949 "is_configured": true, 00:18:31.949 "data_offset": 2048, 00:18:31.949 "data_size": 63488 00:18:31.949 }, 00:18:31.949 { 00:18:31.949 "name": "BaseBdev2", 00:18:31.949 "uuid": "e58cedf7-4085-5fce-bf89-cf523a9e374c", 00:18:31.949 "is_configured": true, 00:18:31.949 "data_offset": 2048, 00:18:31.949 "data_size": 63488 00:18:31.949 } 00:18:31.949 ] 00:18:31.949 }' 00:18:31.949 13:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:31.949 
13:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:31.949 13:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:31.949 13:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:31.949 13:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:32.466 114.33 IOPS, 343.00 MiB/s [2024-11-20T13:38:31.951Z] [2024-11-20 13:38:31.886659] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:32.725 [2024-11-20 13:38:31.992164] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:32.725 [2024-11-20 13:38:31.995098] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:32.984 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:32.984 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:32.984 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:32.984 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:32.984 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:32.984 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:32.984 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.984 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.984 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.984 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 
-- # set +x 00:18:32.984 103.29 IOPS, 309.86 MiB/s [2024-11-20T13:38:32.469Z] 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.244 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:33.244 "name": "raid_bdev1", 00:18:33.244 "uuid": "d481f70d-6d38-4ec0-b17a-a2a6d97b8c2d", 00:18:33.244 "strip_size_kb": 0, 00:18:33.244 "state": "online", 00:18:33.244 "raid_level": "raid1", 00:18:33.244 "superblock": true, 00:18:33.244 "num_base_bdevs": 2, 00:18:33.244 "num_base_bdevs_discovered": 2, 00:18:33.244 "num_base_bdevs_operational": 2, 00:18:33.244 "base_bdevs_list": [ 00:18:33.244 { 00:18:33.244 "name": "spare", 00:18:33.244 "uuid": "559fa14a-88ff-58ba-8e15-e701564a0330", 00:18:33.244 "is_configured": true, 00:18:33.244 "data_offset": 2048, 00:18:33.244 "data_size": 63488 00:18:33.244 }, 00:18:33.244 { 00:18:33.244 "name": "BaseBdev2", 00:18:33.244 "uuid": "e58cedf7-4085-5fce-bf89-cf523a9e374c", 00:18:33.244 "is_configured": true, 00:18:33.244 "data_offset": 2048, 00:18:33.244 "data_size": 63488 00:18:33.244 } 00:18:33.244 ] 00:18:33.244 }' 00:18:33.244 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:33.244 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:33.244 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:33.244 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:33.244 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:18:33.244 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:33.244 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:33.244 13:38:32 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:33.244 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:33.244 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:33.244 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.244 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.244 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:33.244 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.244 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.244 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:33.244 "name": "raid_bdev1", 00:18:33.244 "uuid": "d481f70d-6d38-4ec0-b17a-a2a6d97b8c2d", 00:18:33.244 "strip_size_kb": 0, 00:18:33.244 "state": "online", 00:18:33.244 "raid_level": "raid1", 00:18:33.244 "superblock": true, 00:18:33.244 "num_base_bdevs": 2, 00:18:33.244 "num_base_bdevs_discovered": 2, 00:18:33.244 "num_base_bdevs_operational": 2, 00:18:33.244 "base_bdevs_list": [ 00:18:33.244 { 00:18:33.244 "name": "spare", 00:18:33.244 "uuid": "559fa14a-88ff-58ba-8e15-e701564a0330", 00:18:33.244 "is_configured": true, 00:18:33.244 "data_offset": 2048, 00:18:33.244 "data_size": 63488 00:18:33.244 }, 00:18:33.244 { 00:18:33.244 "name": "BaseBdev2", 00:18:33.244 "uuid": "e58cedf7-4085-5fce-bf89-cf523a9e374c", 00:18:33.244 "is_configured": true, 00:18:33.244 "data_offset": 2048, 00:18:33.244 "data_size": 63488 00:18:33.244 } 00:18:33.244 ] 00:18:33.244 }' 00:18:33.244 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:33.244 13:38:32 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:33.244 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:33.244 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:33.244 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:33.244 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:33.244 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:33.244 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:33.244 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:33.244 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:33.244 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.244 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.244 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.244 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.244 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.244 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.244 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.244 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:33.244 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:18:33.244 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.244 "name": "raid_bdev1", 00:18:33.244 "uuid": "d481f70d-6d38-4ec0-b17a-a2a6d97b8c2d", 00:18:33.244 "strip_size_kb": 0, 00:18:33.244 "state": "online", 00:18:33.245 "raid_level": "raid1", 00:18:33.245 "superblock": true, 00:18:33.245 "num_base_bdevs": 2, 00:18:33.245 "num_base_bdevs_discovered": 2, 00:18:33.245 "num_base_bdevs_operational": 2, 00:18:33.245 "base_bdevs_list": [ 00:18:33.245 { 00:18:33.245 "name": "spare", 00:18:33.245 "uuid": "559fa14a-88ff-58ba-8e15-e701564a0330", 00:18:33.245 "is_configured": true, 00:18:33.245 "data_offset": 2048, 00:18:33.245 "data_size": 63488 00:18:33.245 }, 00:18:33.245 { 00:18:33.245 "name": "BaseBdev2", 00:18:33.245 "uuid": "e58cedf7-4085-5fce-bf89-cf523a9e374c", 00:18:33.245 "is_configured": true, 00:18:33.245 "data_offset": 2048, 00:18:33.245 "data_size": 63488 00:18:33.245 } 00:18:33.245 ] 00:18:33.245 }' 00:18:33.245 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.245 13:38:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:33.812 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:33.812 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.812 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:33.812 [2024-11-20 13:38:33.086626] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:33.812 [2024-11-20 13:38:33.086669] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:33.812 00:18:33.812 Latency(us) 00:18:33.812 [2024-11-20T13:38:33.297Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.812 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 
50, depth: 2, IO size: 3145728) 00:18:33.812 raid_bdev1 : 7.75 95.95 287.85 0.00 0.00 14338.76 314.19 109489.86 00:18:33.812 [2024-11-20T13:38:33.297Z] =================================================================================================================== 00:18:33.812 [2024-11-20T13:38:33.297Z] Total : 95.95 287.85 0.00 0.00 14338.76 314.19 109489.86 00:18:33.812 [2024-11-20 13:38:33.208156] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:33.812 [2024-11-20 13:38:33.208389] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:33.812 [2024-11-20 13:38:33.208483] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:33.812 [2024-11-20 13:38:33.208496] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:33.812 { 00:18:33.812 "results": [ 00:18:33.812 { 00:18:33.812 "job": "raid_bdev1", 00:18:33.812 "core_mask": "0x1", 00:18:33.812 "workload": "randrw", 00:18:33.812 "percentage": 50, 00:18:33.812 "status": "finished", 00:18:33.812 "queue_depth": 2, 00:18:33.812 "io_size": 3145728, 00:18:33.812 "runtime": 7.753986, 00:18:33.812 "iops": 95.95065041386457, 00:18:33.812 "mibps": 287.8519512415937, 00:18:33.812 "io_failed": 0, 00:18:33.812 "io_timeout": 0, 00:18:33.812 "avg_latency_us": 14338.761532150107, 00:18:33.812 "min_latency_us": 314.19116465863453, 00:18:33.812 "max_latency_us": 109489.86345381525 00:18:33.812 } 00:18:33.812 ], 00:18:33.812 "core_count": 1 00:18:33.812 } 00:18:33.813 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.813 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.813 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.813 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:18:33.813 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:18:33.813 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.813 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:33.813 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:33.813 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:18:33.813 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:18:33.813 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:33.813 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:18:33.813 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:33.813 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:33.813 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:33.813 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:18:33.813 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:33.813 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:33.813 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:18:34.071 /dev/nbd0 00:18:34.071 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:34.071 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:34.071 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:34.071 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:18:34.071 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:34.071 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:34.071 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:34.071 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:18:34.071 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:34.071 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:34.072 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:34.072 1+0 records in 00:18:34.072 1+0 records out 00:18:34.072 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00193853 s, 2.1 MB/s 00:18:34.072 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:34.072 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:18:34.072 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:34.072 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:34.072 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:18:34.072 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:34.072 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:34.072 13:38:33 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:34.072 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:18:34.072 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:18:34.072 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:34.072 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:18:34.072 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:34.072 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:34.072 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:34.072 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:18:34.072 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:34.072 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:34.072 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:18:34.330 /dev/nbd1 00:18:34.330 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:34.330 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:34.330 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:34.330 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:18:34.330 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:34.330 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:34.330 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:34.330 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:18:34.330 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:34.588 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:34.588 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:34.588 1+0 records in 00:18:34.588 1+0 records out 00:18:34.588 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366753 s, 11.2 MB/s 00:18:34.588 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:34.588 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:18:34.588 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:34.588 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:34.588 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:18:34.588 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:34.588 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:34.588 13:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:34.588 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:18:34.588 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 
00:18:34.588 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1')
00:18:34.588 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list
00:18:34.588 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i
00:18:34.588 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:18:34.588 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:18:34.847 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:18:34.847 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:18:34.847 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:18:34.847 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:18:34.847 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:18:34.847 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:18:34.847 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break
00:18:34.847 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0
00:18:34.847 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:18:34.847 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:18:34.847 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:18:34.847 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list
00:18:34.847 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i
00:18:34.847 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:18:34.847 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:18:35.106 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:18:35.106 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:18:35.106 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:18:35.106 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:18:35.106 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:18:35.106 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:18:35.106 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break
00:18:35.106 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0
00:18:35.106 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']'
00:18:35.106 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare
00:18:35.106 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:35.106 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:35.106 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:35.106 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:18:35.106 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:35.106 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:35.106 [2024-11-20 13:38:34.519423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:18:35.106 [2024-11-20 13:38:34.519481] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:35.106 [2024-11-20 13:38:34.519507] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580
00:18:35.106 [2024-11-20 13:38:34.519519] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:35.106 [2024-11-20 13:38:34.522045] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:35.106 [2024-11-20 13:38:34.522099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:18:35.106 [2024-11-20 13:38:34.522198] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:18:35.106 [2024-11-20 13:38:34.522251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:18:35.106 [2024-11-20 13:38:34.522449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:18:35.106 spare
00:18:35.106 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:35.106 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine
00:18:35.106 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:35.106 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:35.365 [2024-11-20 13:38:34.622406] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:18:35.365 [2024-11-20 13:38:34.622470] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:18:35.365 [2024-11-20 13:38:34.622825] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0
00:18:35.365 [2024-11-20 13:38:34.623014] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:18:35.365 [2024-11-20 13:38:34.623027] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00
00:18:35.365 [2024-11-20 13:38:34.623295] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:35.365 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:35.365 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:18:35.365 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:35.365 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:35.365 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:35.365 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:35.365 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:18:35.365 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:35.365 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:35.365 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:35.365 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:35.365 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:35.365 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:35.365 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:35.365 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:35.365 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:35.365 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:35.365 "name": "raid_bdev1",
00:18:35.365 "uuid": "d481f70d-6d38-4ec0-b17a-a2a6d97b8c2d",
00:18:35.365 "strip_size_kb": 0,
00:18:35.365 "state": "online",
00:18:35.365 "raid_level": "raid1",
00:18:35.365 "superblock": true,
00:18:35.365 "num_base_bdevs": 2,
00:18:35.365 "num_base_bdevs_discovered": 2,
00:18:35.365 "num_base_bdevs_operational": 2,
00:18:35.365 "base_bdevs_list": [
00:18:35.365 {
00:18:35.365 "name": "spare",
00:18:35.365 "uuid": "559fa14a-88ff-58ba-8e15-e701564a0330",
00:18:35.365 "is_configured": true,
00:18:35.365 "data_offset": 2048,
00:18:35.365 "data_size": 63488
00:18:35.365 },
00:18:35.365 {
00:18:35.365 "name": "BaseBdev2",
00:18:35.365 "uuid": "e58cedf7-4085-5fce-bf89-cf523a9e374c",
00:18:35.365 "is_configured": true,
00:18:35.365 "data_offset": 2048,
00:18:35.365 "data_size": 63488
00:18:35.365 }
00:18:35.365 ]
00:18:35.365 }'
00:18:35.365 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:35.365 13:38:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:35.624 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:18:35.624 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:35.624 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:18:35.624 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:18:35.624 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:35.624 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:35.624 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:35.624 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:35.624 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:35.624 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:35.624 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:35.624 "name": "raid_bdev1",
00:18:35.624 "uuid": "d481f70d-6d38-4ec0-b17a-a2a6d97b8c2d",
00:18:35.624 "strip_size_kb": 0,
00:18:35.624 "state": "online",
00:18:35.624 "raid_level": "raid1",
00:18:35.624 "superblock": true,
00:18:35.624 "num_base_bdevs": 2,
00:18:35.624 "num_base_bdevs_discovered": 2,
00:18:35.624 "num_base_bdevs_operational": 2,
00:18:35.624 "base_bdevs_list": [
00:18:35.624 {
00:18:35.624 "name": "spare",
00:18:35.624 "uuid": "559fa14a-88ff-58ba-8e15-e701564a0330",
00:18:35.624 "is_configured": true,
00:18:35.624 "data_offset": 2048,
00:18:35.624 "data_size": 63488
00:18:35.624 },
00:18:35.624 {
00:18:35.624 "name": "BaseBdev2",
00:18:35.624 "uuid": "e58cedf7-4085-5fce-bf89-cf523a9e374c",
00:18:35.624 "is_configured": true,
00:18:35.624 "data_offset": 2048,
00:18:35.624 "data_size": 63488
00:18:35.624 }
00:18:35.624 ]
00:18:35.624 }'
00:18:35.882 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:35.882 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:18:35.882 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:35.882 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:18:35.882 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:35.882 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:18:35.882 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:35.882 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:35.882 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:35.882 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:18:35.882 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:18:35.882 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:35.882 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:35.883 [2024-11-20 13:38:35.246589] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:18:35.883 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:35.883 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:18:35.883 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:35.883 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:35.883 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:35.883 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:35.883 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:18:35.883 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:35.883 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:35.883 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:35.883 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:35.883 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:35.883 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:35.883 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:35.883 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:35.883 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:35.883 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:35.883 "name": "raid_bdev1",
00:18:35.883 "uuid": "d481f70d-6d38-4ec0-b17a-a2a6d97b8c2d",
00:18:35.883 "strip_size_kb": 0,
00:18:35.883 "state": "online",
00:18:35.883 "raid_level": "raid1",
00:18:35.883 "superblock": true,
00:18:35.883 "num_base_bdevs": 2,
00:18:35.883 "num_base_bdevs_discovered": 1,
00:18:35.883 "num_base_bdevs_operational": 1,
00:18:35.883 "base_bdevs_list": [
00:18:35.883 {
00:18:35.883 "name": null,
00:18:35.883 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:35.883 "is_configured": false,
00:18:35.883 "data_offset": 0,
00:18:35.883 "data_size": 63488
00:18:35.883 },
00:18:35.883 {
00:18:35.883 "name": "BaseBdev2",
00:18:35.883 "uuid": "e58cedf7-4085-5fce-bf89-cf523a9e374c",
00:18:35.883 "is_configured": true,
00:18:35.883 "data_offset": 2048,
00:18:35.883 "data_size": 63488
00:18:35.883 }
00:18:35.883 ]
00:18:35.883 }'
00:18:35.883 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:35.883 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:36.450 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:18:36.450 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:36.450 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:36.450 [2024-11-20 13:38:35.646491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:18:36.450 [2024-11-20 13:38:35.646839] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:18:36.450 [2024-11-20 13:38:35.646868] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:18:36.450 [2024-11-20 13:38:35.646944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:18:36.450 [2024-11-20 13:38:35.663761] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0
00:18:36.450 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:36.450 13:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1
00:18:36.450 [2024-11-20 13:38:35.666128] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:18:37.403 13:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:37.403 13:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:37.403 13:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:18:37.403 13:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:18:37.403 13:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:37.403 13:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:37.403 13:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:37.403 13:38:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:37.403 13:38:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:37.403 13:38:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:37.403 13:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:37.403 "name": "raid_bdev1",
00:18:37.403 "uuid": "d481f70d-6d38-4ec0-b17a-a2a6d97b8c2d",
00:18:37.403 "strip_size_kb": 0,
00:18:37.403 "state": "online",
00:18:37.403 "raid_level": "raid1",
00:18:37.403 "superblock": true,
00:18:37.403 "num_base_bdevs": 2,
00:18:37.403 "num_base_bdevs_discovered": 2,
00:18:37.403 "num_base_bdevs_operational": 2,
00:18:37.403 "process": {
00:18:37.403 "type": "rebuild",
00:18:37.403 "target": "spare",
00:18:37.403 "progress": {
00:18:37.403 "blocks": 20480,
00:18:37.403 "percent": 32
00:18:37.403 }
00:18:37.403 },
00:18:37.403 "base_bdevs_list": [
00:18:37.403 {
00:18:37.403 "name": "spare",
00:18:37.403 "uuid": "559fa14a-88ff-58ba-8e15-e701564a0330",
00:18:37.403 "is_configured": true,
00:18:37.403 "data_offset": 2048,
00:18:37.403 "data_size": 63488
00:18:37.403 },
00:18:37.404 {
00:18:37.404 "name": "BaseBdev2",
00:18:37.404 "uuid": "e58cedf7-4085-5fce-bf89-cf523a9e374c",
00:18:37.404 "is_configured": true,
00:18:37.404 "data_offset": 2048,
00:18:37.404 "data_size": 63488
00:18:37.404 }
00:18:37.404 ]
00:18:37.404 }'
00:18:37.404 13:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:37.404 13:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
13:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:37.404 13:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:18:37.404 13:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:18:37.404 13:38:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:37.404 13:38:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:37.404 [2024-11-20 13:38:36.801707] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:18:37.404 [2024-11-20 13:38:36.871706] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:18:37.404 [2024-11-20 13:38:36.871994] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:37.404 [2024-11-20 13:38:36.872018] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:18:37.404 [2024-11-20 13:38:36.872032] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:18:37.663 13:38:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:37.663 13:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:18:37.663 13:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:37.663 13:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:37.663 13:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:37.663 13:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:37.663 13:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:18:37.663 13:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:37.663 13:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:37.663 13:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:37.663 13:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:37.663 13:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:37.663 13:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:37.663 13:38:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:37.663 13:38:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:37.663 13:38:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:37.663 13:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:37.663 "name": "raid_bdev1",
00:18:37.663 "uuid": "d481f70d-6d38-4ec0-b17a-a2a6d97b8c2d",
00:18:37.663 "strip_size_kb": 0,
00:18:37.663 "state": "online",
00:18:37.663 "raid_level": "raid1",
00:18:37.663 "superblock": true,
00:18:37.663 "num_base_bdevs": 2,
00:18:37.663 "num_base_bdevs_discovered": 1,
00:18:37.663 "num_base_bdevs_operational": 1,
00:18:37.663 "base_bdevs_list": [
00:18:37.663 {
00:18:37.663 "name": null,
00:18:37.663 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:37.663 "is_configured": false,
00:18:37.663 "data_offset": 0,
00:18:37.663 "data_size": 63488
00:18:37.663 },
00:18:37.663 {
00:18:37.663 "name": "BaseBdev2",
00:18:37.663 "uuid": "e58cedf7-4085-5fce-bf89-cf523a9e374c",
00:18:37.663 "is_configured": true,
00:18:37.663 "data_offset": 2048,
00:18:37.663 "data_size": 63488
00:18:37.663 }
00:18:37.663 ]
00:18:37.663 }'
00:18:37.663 13:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:37.663 13:38:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:37.922 13:38:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:18:37.922 13:38:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:37.922 13:38:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:37.922 [2024-11-20 13:38:37.367340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:18:37.922 [2024-11-20 13:38:37.367556] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:37.922 [2024-11-20 13:38:37.367692] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:18:37.922 [2024-11-20 13:38:37.367789] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:37.922 [2024-11-20 13:38:37.368326] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:37.922 [2024-11-20 13:38:37.368354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:18:37.922 [2024-11-20 13:38:37.368460] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:18:37.922 [2024-11-20 13:38:37.368478] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:18:37.922 [2024-11-20 13:38:37.368491] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:18:37.922 [2024-11-20 13:38:37.368516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:18:37.922 [2024-11-20 13:38:37.385612] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270
00:18:37.922 spare
00:18:37.922 13:38:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:37.922 13:38:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1
00:18:37.922 [2024-11-20 13:38:37.388031] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:18:39.301 13:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:39.301 13:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:39.301 13:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:18:39.301 13:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:18:39.301 13:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:39.301 13:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:39.301 13:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:39.301 13:38:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:39.301 13:38:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:39.301 13:38:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:39.301 13:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:39.301 "name": "raid_bdev1",
00:18:39.301 "uuid": "d481f70d-6d38-4ec0-b17a-a2a6d97b8c2d",
00:18:39.301 "strip_size_kb": 0,
00:18:39.301 "state": "online",
00:18:39.301 "raid_level": "raid1",
00:18:39.301 "superblock": true,
00:18:39.301 "num_base_bdevs": 2,
00:18:39.301 "num_base_bdevs_discovered": 2,
00:18:39.301 "num_base_bdevs_operational": 2,
00:18:39.301 "process": {
00:18:39.301 "type": "rebuild",
00:18:39.301 "target": "spare",
00:18:39.301 "progress": {
00:18:39.301 "blocks": 20480,
00:18:39.301 "percent": 32
00:18:39.301 }
00:18:39.301 },
00:18:39.301 "base_bdevs_list": [
00:18:39.301 {
00:18:39.301 "name": "spare",
00:18:39.301 "uuid": "559fa14a-88ff-58ba-8e15-e701564a0330",
00:18:39.301 "is_configured": true,
00:18:39.301 "data_offset": 2048,
00:18:39.301 "data_size": 63488
00:18:39.301 },
00:18:39.301 {
00:18:39.301 "name": "BaseBdev2",
00:18:39.301 "uuid": "e58cedf7-4085-5fce-bf89-cf523a9e374c",
00:18:39.301 "is_configured": true,
00:18:39.301 "data_offset": 2048,
00:18:39.301 "data_size": 63488
00:18:39.301 }
00:18:39.301 ]
00:18:39.301 }'
00:18:39.301 13:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:39.301 13:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:18:39.301 13:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:39.301 13:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:18:39.301 13:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:18:39.301 13:38:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:39.301 13:38:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:39.301 [2024-11-20 13:38:38.535493] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:18:39.301 [2024-11-20 13:38:38.593565] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:18:39.301 [2024-11-20 13:38:38.593648] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:39.301 [2024-11-20 13:38:38.593671] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:18:39.301 [2024-11-20 13:38:38.593680] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:18:39.301 13:38:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:39.301 13:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:18:39.301 13:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:39.301 13:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:39.301 13:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:39.301 13:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:39.301 13:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:18:39.301 13:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:39.301 13:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:39.301 13:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:39.301 13:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:39.301 13:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:39.301 13:38:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:39.301 13:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:39.301 13:38:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:39.301 13:38:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:39.301 13:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:39.301 "name": "raid_bdev1",
00:18:39.301 "uuid": "d481f70d-6d38-4ec0-b17a-a2a6d97b8c2d",
00:18:39.301 "strip_size_kb": 0,
00:18:39.301 "state": "online",
00:18:39.301 "raid_level": "raid1",
00:18:39.301 "superblock": true,
00:18:39.301 "num_base_bdevs": 2,
00:18:39.302 "num_base_bdevs_discovered": 1,
00:18:39.302 "num_base_bdevs_operational": 1,
00:18:39.302 "base_bdevs_list": [
00:18:39.302 {
00:18:39.302 "name": null,
00:18:39.302 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:39.302 "is_configured": false,
00:18:39.302 "data_offset": 0,
00:18:39.302 "data_size": 63488
00:18:39.302 },
00:18:39.302 {
00:18:39.302 "name": "BaseBdev2",
00:18:39.302 "uuid": "e58cedf7-4085-5fce-bf89-cf523a9e374c",
00:18:39.302 "is_configured": true,
00:18:39.302 "data_offset": 2048,
00:18:39.302 "data_size": 63488
00:18:39.302 }
00:18:39.302 ]
00:18:39.302 }'
00:18:39.302 13:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:39.302 13:38:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:39.883 13:38:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:18:39.883 13:38:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:39.883 13:38:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:18:39.883 13:38:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:18:39.883 13:38:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:39.883 13:38:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:39.883 13:38:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:39.883 13:38:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:39.883 13:38:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:39.883 13:38:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:39.883 13:38:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:39.883 "name": "raid_bdev1",
00:18:39.883 "uuid": "d481f70d-6d38-4ec0-b17a-a2a6d97b8c2d",
00:18:39.883 "strip_size_kb": 0,
00:18:39.883 "state": "online",
00:18:39.883 "raid_level": "raid1",
00:18:39.883 "superblock": true,
00:18:39.883 "num_base_bdevs": 2,
00:18:39.883 "num_base_bdevs_discovered": 1,
00:18:39.883 "num_base_bdevs_operational": 1,
00:18:39.883 "base_bdevs_list": [
00:18:39.883 {
00:18:39.883 "name": null,
00:18:39.883 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:39.883 "is_configured": false,
00:18:39.883 "data_offset": 0,
00:18:39.883 "data_size": 63488
00:18:39.883 },
00:18:39.883 {
00:18:39.883 "name": "BaseBdev2",
00:18:39.883 "uuid": "e58cedf7-4085-5fce-bf89-cf523a9e374c",
00:18:39.883 "is_configured": true,
00:18:39.883 "data_offset": 2048,
00:18:39.883 "data_size": 63488
00:18:39.883 }
00:18:39.883 ]
00:18:39.883 }'
00:18:39.883 13:38:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:39.883 13:38:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:18:39.883 13:38:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:39.883 13:38:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:18:39.883 13:38:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:18:39.883 13:38:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:39.883 13:38:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:39.883 13:38:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:39.883 13:38:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:18:39.883 13:38:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:39.883 13:38:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:39.883 [2024-11-20 13:38:39.220129] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:18:39.883 [2024-11-20 13:38:39.220191] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:39.883 [2024-11-20 13:38:39.220227] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:18:39.883 [2024-11-20 13:38:39.220241] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:39.883 [2024-11-20 13:38:39.220707] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:39.883 [2024-11-20 13:38:39.220732] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:18:39.883 [2024-11-20 13:38:39.220819] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1
00:18:39.883 [2024-11-20 13:38:39.220835] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:18:39.883 [2024-11-20 13:38:39.220851] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:18:39.883 [2024-11-20 13:38:39.220864] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument
00:18:39.883 BaseBdev1
00:18:39.883 13:38:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:39.883 13:38:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1
00:18:40.817 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:18:40.817 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:40.817 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:40.817 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:40.817 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:40.817 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:18:40.817 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:40.817 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:40.817 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:40.817 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:40.817 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:40.817 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:40.817 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:40.817 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:40.817 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:40.817 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:40.817 "name": "raid_bdev1",
00:18:40.817 "uuid": "d481f70d-6d38-4ec0-b17a-a2a6d97b8c2d",
00:18:40.817 "strip_size_kb": 0,
00:18:40.817 "state": "online",
00:18:40.817 "raid_level": "raid1",
00:18:40.817 "superblock": true,
00:18:40.817 "num_base_bdevs": 2,
00:18:40.817 "num_base_bdevs_discovered": 1,
00:18:40.817 "num_base_bdevs_operational": 1,
00:18:40.817 "base_bdevs_list": [
00:18:40.817 {
00:18:40.817 "name": null,
00:18:40.817 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:40.817 "is_configured": false,
00:18:40.817 "data_offset": 0,
00:18:40.817 "data_size": 63488
00:18:40.817 },
00:18:40.817 {
00:18:40.818 "name": "BaseBdev2",
00:18:40.818 "uuid": "e58cedf7-4085-5fce-bf89-cf523a9e374c",
00:18:40.818 "is_configured": true,
00:18:40.818 "data_offset": 2048,
00:18:40.818 "data_size": 63488
00:18:40.818 }
00:18:40.818 ]
00:18:40.818 }'
00:18:40.818 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:40.818 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:41.385 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none
00:18:41.385 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:41.385 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:18:41.385 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:18:41.385 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:41.385 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:41.385 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r
'.[] | select(.name == "raid_bdev1")' 00:18:41.385 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.385 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:41.385 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.385 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:41.385 "name": "raid_bdev1", 00:18:41.385 "uuid": "d481f70d-6d38-4ec0-b17a-a2a6d97b8c2d", 00:18:41.385 "strip_size_kb": 0, 00:18:41.385 "state": "online", 00:18:41.385 "raid_level": "raid1", 00:18:41.385 "superblock": true, 00:18:41.385 "num_base_bdevs": 2, 00:18:41.385 "num_base_bdevs_discovered": 1, 00:18:41.385 "num_base_bdevs_operational": 1, 00:18:41.385 "base_bdevs_list": [ 00:18:41.385 { 00:18:41.386 "name": null, 00:18:41.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.386 "is_configured": false, 00:18:41.386 "data_offset": 0, 00:18:41.386 "data_size": 63488 00:18:41.386 }, 00:18:41.386 { 00:18:41.386 "name": "BaseBdev2", 00:18:41.386 "uuid": "e58cedf7-4085-5fce-bf89-cf523a9e374c", 00:18:41.386 "is_configured": true, 00:18:41.386 "data_offset": 2048, 00:18:41.386 "data_size": 63488 00:18:41.386 } 00:18:41.386 ] 00:18:41.386 }' 00:18:41.386 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:41.386 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:41.386 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:41.386 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:41.386 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:41.386 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@652 -- # local es=0 00:18:41.386 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:41.386 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:41.386 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:41.386 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:41.386 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:41.386 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:41.386 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.386 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:41.386 [2024-11-20 13:38:40.802496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:41.386 [2024-11-20 13:38:40.802798] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:41.386 [2024-11-20 13:38:40.802830] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:41.386 request: 00:18:41.386 { 00:18:41.386 "base_bdev": "BaseBdev1", 00:18:41.386 "raid_bdev": "raid_bdev1", 00:18:41.386 "method": "bdev_raid_add_base_bdev", 00:18:41.386 "req_id": 1 00:18:41.386 } 00:18:41.386 Got JSON-RPC error response 00:18:41.386 response: 00:18:41.386 { 00:18:41.386 "code": -22, 00:18:41.386 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:41.386 } 00:18:41.386 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:18:41.386 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:18:41.386 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:41.386 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:41.386 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:41.386 13:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:42.761 13:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:42.761 13:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:42.761 13:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:42.761 13:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:42.761 13:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:42.761 13:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:42.761 13:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.761 13:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.761 13:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.761 13:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.761 13:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.761 13:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.761 13:38:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:42.761 13:38:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:42.761 13:38:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.761 13:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.761 "name": "raid_bdev1", 00:18:42.761 "uuid": "d481f70d-6d38-4ec0-b17a-a2a6d97b8c2d", 00:18:42.761 "strip_size_kb": 0, 00:18:42.761 "state": "online", 00:18:42.761 "raid_level": "raid1", 00:18:42.761 "superblock": true, 00:18:42.761 "num_base_bdevs": 2, 00:18:42.761 "num_base_bdevs_discovered": 1, 00:18:42.761 "num_base_bdevs_operational": 1, 00:18:42.761 "base_bdevs_list": [ 00:18:42.761 { 00:18:42.761 "name": null, 00:18:42.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.761 "is_configured": false, 00:18:42.761 "data_offset": 0, 00:18:42.761 "data_size": 63488 00:18:42.761 }, 00:18:42.761 { 00:18:42.761 "name": "BaseBdev2", 00:18:42.761 "uuid": "e58cedf7-4085-5fce-bf89-cf523a9e374c", 00:18:42.761 "is_configured": true, 00:18:42.761 "data_offset": 2048, 00:18:42.761 "data_size": 63488 00:18:42.761 } 00:18:42.761 ] 00:18:42.761 }' 00:18:42.761 13:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.761 13:38:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:42.761 13:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:42.761 13:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:42.761 13:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:42.761 13:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:42.761 13:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:42.761 13:38:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.761 13:38:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.761 13:38:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:42.761 13:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.761 13:38:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.020 13:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:43.020 "name": "raid_bdev1", 00:18:43.020 "uuid": "d481f70d-6d38-4ec0-b17a-a2a6d97b8c2d", 00:18:43.020 "strip_size_kb": 0, 00:18:43.020 "state": "online", 00:18:43.020 "raid_level": "raid1", 00:18:43.020 "superblock": true, 00:18:43.020 "num_base_bdevs": 2, 00:18:43.020 "num_base_bdevs_discovered": 1, 00:18:43.020 "num_base_bdevs_operational": 1, 00:18:43.020 "base_bdevs_list": [ 00:18:43.020 { 00:18:43.020 "name": null, 00:18:43.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.020 "is_configured": false, 00:18:43.020 "data_offset": 0, 00:18:43.020 "data_size": 63488 00:18:43.020 }, 00:18:43.020 { 00:18:43.020 "name": "BaseBdev2", 00:18:43.020 "uuid": "e58cedf7-4085-5fce-bf89-cf523a9e374c", 00:18:43.020 "is_configured": true, 00:18:43.020 "data_offset": 2048, 00:18:43.020 "data_size": 63488 00:18:43.020 } 00:18:43.020 ] 00:18:43.020 }' 00:18:43.020 13:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:43.020 13:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:43.020 13:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.020 13:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:43.020 13:38:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76623 00:18:43.020 13:38:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 76623 ']' 00:18:43.020 13:38:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 76623 00:18:43.020 13:38:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:18:43.020 13:38:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:43.020 13:38:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76623 00:18:43.020 13:38:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:43.020 killing process with pid 76623 00:18:43.020 Received shutdown signal, test time was about 16.969051 seconds 00:18:43.020 00:18:43.020 Latency(us) 00:18:43.020 [2024-11-20T13:38:42.505Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.020 [2024-11-20T13:38:42.505Z] =================================================================================================================== 00:18:43.020 [2024-11-20T13:38:42.505Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:43.020 13:38:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:43.020 13:38:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76623' 00:18:43.020 13:38:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 76623 00:18:43.020 [2024-11-20 13:38:42.386502] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:43.020 13:38:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 76623 00:18:43.020 [2024-11-20 13:38:42.386648] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:43.020 [2024-11-20 13:38:42.386700] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:43.020 [2024-11-20 13:38:42.386714] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:43.279 [2024-11-20 13:38:42.619020] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:44.708 ************************************ 00:18:44.708 END TEST raid_rebuild_test_sb_io 00:18:44.708 ************************************ 00:18:44.708 13:38:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:18:44.708 00:18:44.708 real 0m20.150s 00:18:44.708 user 0m26.094s 00:18:44.708 sys 0m2.489s 00:18:44.708 13:38:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:44.708 13:38:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:44.708 13:38:43 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:18:44.708 13:38:43 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:18:44.708 13:38:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:44.708 13:38:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:44.708 13:38:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:44.708 ************************************ 00:18:44.708 START TEST raid_rebuild_test 00:18:44.708 ************************************ 00:18:44.708 13:38:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:18:44.708 13:38:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:44.708 13:38:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:44.708 13:38:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:18:44.708 13:38:43 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:44.708 13:38:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:44.708 13:38:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:44.708 13:38:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:44.708 13:38:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:44.708 13:38:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:44.708 13:38:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:44.708 13:38:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:44.708 13:38:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:44.708 13:38:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:44.708 13:38:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:44.708 13:38:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:44.708 13:38:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:44.708 13:38:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:44.708 13:38:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:44.708 13:38:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:44.708 13:38:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:44.708 13:38:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:44.708 13:38:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:44.708 13:38:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:18:44.708 13:38:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:44.708 13:38:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:44.708 13:38:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:44.708 13:38:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:44.708 13:38:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:44.708 13:38:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:18:44.708 13:38:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77324 00:18:44.708 13:38:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:44.708 13:38:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77324 00:18:44.708 13:38:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77324 ']' 00:18:44.708 13:38:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.708 13:38:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:44.708 13:38:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:44.708 13:38:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:44.708 13:38:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.708 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:44.708 Zero copy mechanism will not be used. 
00:18:44.708 [2024-11-20 13:38:44.053040] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:18:44.708 [2024-11-20 13:38:44.053190] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77324 ] 00:18:44.966 [2024-11-20 13:38:44.238548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.966 [2024-11-20 13:38:44.363677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.224 [2024-11-20 13:38:44.586460] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:45.224 [2024-11-20 13:38:44.586527] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:45.482 13:38:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:45.482 13:38:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:18:45.482 13:38:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:45.482 13:38:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:45.482 13:38:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.482 13:38:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.482 BaseBdev1_malloc 00:18:45.482 13:38:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.482 13:38:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:45.482 13:38:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.482 13:38:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.482 
[2024-11-20 13:38:44.951412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:45.482 [2024-11-20 13:38:44.952504] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:45.482 [2024-11-20 13:38:44.952542] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:45.482 [2024-11-20 13:38:44.952559] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:45.482 [2024-11-20 13:38:44.955138] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:45.482 [2024-11-20 13:38:44.955187] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:45.482 BaseBdev1 00:18:45.482 13:38:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.482 13:38:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:45.482 13:38:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:45.482 13:38:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.483 13:38:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.741 BaseBdev2_malloc 00:18:45.741 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.741 13:38:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:45.741 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.741 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.741 [2024-11-20 13:38:45.010208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:45.741 [2024-11-20 13:38:45.010287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:18:45.741 [2024-11-20 13:38:45.010318] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:45.741 [2024-11-20 13:38:45.010334] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:45.741 [2024-11-20 13:38:45.012902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:45.741 [2024-11-20 13:38:45.012947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:45.741 BaseBdev2 00:18:45.741 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.741 13:38:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:45.741 13:38:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:45.741 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.741 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.741 BaseBdev3_malloc 00:18:45.741 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.741 13:38:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:45.741 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.741 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.741 [2024-11-20 13:38:45.086121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:45.741 [2024-11-20 13:38:45.086182] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:45.741 [2024-11-20 13:38:45.086207] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:45.741 [2024-11-20 13:38:45.086223] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:45.741 [2024-11-20 13:38:45.088760] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:45.741 [2024-11-20 13:38:45.088810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:45.741 BaseBdev3 00:18:45.741 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.741 13:38:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:45.741 13:38:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:45.741 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.741 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.741 BaseBdev4_malloc 00:18:45.741 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.741 13:38:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:45.741 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.741 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.741 [2024-11-20 13:38:45.144923] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:45.741 [2024-11-20 13:38:45.144991] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:45.741 [2024-11-20 13:38:45.145015] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:45.741 [2024-11-20 13:38:45.145030] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:45.741 [2024-11-20 13:38:45.147507] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:45.741 [2024-11-20 13:38:45.147557] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:45.741 BaseBdev4 00:18:45.741 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.741 13:38:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:45.741 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.741 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.741 spare_malloc 00:18:45.741 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.741 13:38:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:45.741 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.741 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.741 spare_delay 00:18:45.741 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.741 13:38:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:45.741 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.742 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.742 [2024-11-20 13:38:45.216424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:45.742 [2024-11-20 13:38:45.216482] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:45.742 [2024-11-20 13:38:45.216503] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:45.742 [2024-11-20 13:38:45.216519] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:45.742 [2024-11-20 
13:38:45.219003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:45.742 [2024-11-20 13:38:45.219067] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:45.742 spare 00:18:45.742 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.742 13:38:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:45.742 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.742 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.000 [2024-11-20 13:38:45.228440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:46.000 [2024-11-20 13:38:45.230740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:46.000 [2024-11-20 13:38:45.230811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:46.000 [2024-11-20 13:38:45.230866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:46.000 [2024-11-20 13:38:45.230948] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:46.000 [2024-11-20 13:38:45.230964] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:46.000 [2024-11-20 13:38:45.231286] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:46.000 [2024-11-20 13:38:45.231468] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:46.000 [2024-11-20 13:38:45.231483] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:46.000 [2024-11-20 13:38:45.231654] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:18:46.000 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.000 13:38:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:46.000 13:38:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:46.000 13:38:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:46.000 13:38:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:46.000 13:38:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:46.000 13:38:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:46.000 13:38:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.000 13:38:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.000 13:38:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.000 13:38:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.000 13:38:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.000 13:38:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.000 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.000 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.000 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.000 13:38:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.000 "name": "raid_bdev1", 00:18:46.000 "uuid": "ec90cc6a-853e-42f5-942f-9a63be2293b6", 00:18:46.000 "strip_size_kb": 0, 00:18:46.000 "state": "online", 00:18:46.000 "raid_level": 
"raid1", 00:18:46.000 "superblock": false, 00:18:46.000 "num_base_bdevs": 4, 00:18:46.000 "num_base_bdevs_discovered": 4, 00:18:46.000 "num_base_bdevs_operational": 4, 00:18:46.000 "base_bdevs_list": [ 00:18:46.000 { 00:18:46.000 "name": "BaseBdev1", 00:18:46.000 "uuid": "b14ffd2d-563b-5a6c-97f4-3bff6eb35164", 00:18:46.000 "is_configured": true, 00:18:46.000 "data_offset": 0, 00:18:46.000 "data_size": 65536 00:18:46.000 }, 00:18:46.000 { 00:18:46.000 "name": "BaseBdev2", 00:18:46.000 "uuid": "21e61b04-af2f-5d63-82f2-f96c2f7603ab", 00:18:46.000 "is_configured": true, 00:18:46.000 "data_offset": 0, 00:18:46.000 "data_size": 65536 00:18:46.000 }, 00:18:46.000 { 00:18:46.000 "name": "BaseBdev3", 00:18:46.000 "uuid": "ebc6da13-0693-53ee-9df5-25c9e8338c57", 00:18:46.000 "is_configured": true, 00:18:46.000 "data_offset": 0, 00:18:46.000 "data_size": 65536 00:18:46.000 }, 00:18:46.000 { 00:18:46.000 "name": "BaseBdev4", 00:18:46.000 "uuid": "da7feaaf-0e89-5d2e-b526-1c30cbae9e3e", 00:18:46.000 "is_configured": true, 00:18:46.000 "data_offset": 0, 00:18:46.000 "data_size": 65536 00:18:46.000 } 00:18:46.000 ] 00:18:46.000 }' 00:18:46.000 13:38:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.000 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.259 13:38:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:46.259 13:38:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:46.259 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.259 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.259 [2024-11-20 13:38:45.656327] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:46.259 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.259 13:38:45 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:18:46.259 13:38:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:46.259 13:38:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.259 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.259 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.259 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.259 13:38:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:18:46.259 13:38:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:46.259 13:38:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:46.259 13:38:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:46.259 13:38:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:46.259 13:38:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:46.259 13:38:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:46.259 13:38:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:46.259 13:38:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:46.259 13:38:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:46.259 13:38:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:46.259 13:38:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:46.259 13:38:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:46.259 13:38:45 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:46.519 [2024-11-20 13:38:45.943620] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:46.519 /dev/nbd0 00:18:46.519 13:38:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:46.519 13:38:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:46.519 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:46.519 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:46.519 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:46.519 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:46.519 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:46.519 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:46.519 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:46.519 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:46.519 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:46.519 1+0 records in 00:18:46.519 1+0 records out 00:18:46.519 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028023 s, 14.6 MB/s 00:18:46.519 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:46.519 13:38:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:46.519 13:38:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:18:46.777 13:38:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:46.777 13:38:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:46.777 13:38:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:46.777 13:38:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:46.777 13:38:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:46.777 13:38:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:46.777 13:38:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:18:54.890 65536+0 records in 00:18:54.890 65536+0 records out 00:18:54.890 33554432 bytes (34 MB, 32 MiB) copied, 7.174 s, 4.7 MB/s 00:18:54.890 13:38:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:54.890 13:38:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:54.890 13:38:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:54.890 13:38:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:54.890 13:38:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:54.890 13:38:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:54.890 13:38:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:54.890 [2024-11-20 13:38:53.420503] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:54.890 13:38:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:54.890 13:38:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:54.890 
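[Annotation, not part of the original log] The `dd` transfer size reported above follows directly from the raid bdev geometry logged earlier (`blockcnt 65536, blocklen 512`): writing every block of the device moves 65536 × 512 = 33554432 bytes (32 MiB). A minimal sketch reproducing that arithmetic, using the values taken from this log:

```shell
#!/bin/sh
# Sketch: expected size of the full-device dd write over /dev/nbd0.
# blockcnt and blocklen are copied from the raid_bdev_configure_cont
# debug line above; no SPDK is required to check the arithmetic.
blockcnt=65536
blocklen=512
total_bytes=$((blockcnt * blocklen))
echo "expected dd transfer: ${total_bytes} bytes ($((total_bytes / 1024 / 1024)) MiB)"
```

This matches the `33554432 bytes (34 MB, 32 MiB) copied` line reported by `dd` above.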
13:38:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:54.890 13:38:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:54.890 13:38:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:54.890 13:38:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:54.890 13:38:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:54.890 13:38:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:54.890 13:38:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:54.890 13:38:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.890 13:38:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.890 [2024-11-20 13:38:53.442957] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:54.890 13:38:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.890 13:38:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:54.890 13:38:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:54.890 13:38:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:54.890 13:38:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:54.890 13:38:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:54.890 13:38:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:54.890 13:38:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.890 13:38:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.890 13:38:53 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.890 13:38:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.890 13:38:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.890 13:38:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.890 13:38:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.890 13:38:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.890 13:38:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.890 13:38:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.890 "name": "raid_bdev1", 00:18:54.890 "uuid": "ec90cc6a-853e-42f5-942f-9a63be2293b6", 00:18:54.890 "strip_size_kb": 0, 00:18:54.890 "state": "online", 00:18:54.890 "raid_level": "raid1", 00:18:54.890 "superblock": false, 00:18:54.890 "num_base_bdevs": 4, 00:18:54.890 "num_base_bdevs_discovered": 3, 00:18:54.890 "num_base_bdevs_operational": 3, 00:18:54.890 "base_bdevs_list": [ 00:18:54.890 { 00:18:54.890 "name": null, 00:18:54.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.890 "is_configured": false, 00:18:54.890 "data_offset": 0, 00:18:54.891 "data_size": 65536 00:18:54.891 }, 00:18:54.891 { 00:18:54.891 "name": "BaseBdev2", 00:18:54.891 "uuid": "21e61b04-af2f-5d63-82f2-f96c2f7603ab", 00:18:54.891 "is_configured": true, 00:18:54.891 "data_offset": 0, 00:18:54.891 "data_size": 65536 00:18:54.891 }, 00:18:54.891 { 00:18:54.891 "name": "BaseBdev3", 00:18:54.891 "uuid": "ebc6da13-0693-53ee-9df5-25c9e8338c57", 00:18:54.891 "is_configured": true, 00:18:54.891 "data_offset": 0, 00:18:54.891 "data_size": 65536 00:18:54.891 }, 00:18:54.891 { 00:18:54.891 "name": "BaseBdev4", 00:18:54.891 "uuid": "da7feaaf-0e89-5d2e-b526-1c30cbae9e3e", 00:18:54.891 
"is_configured": true, 00:18:54.891 "data_offset": 0, 00:18:54.891 "data_size": 65536 00:18:54.891 } 00:18:54.891 ] 00:18:54.891 }' 00:18:54.891 13:38:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.891 13:38:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.891 13:38:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:54.891 13:38:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.891 13:38:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.891 [2024-11-20 13:38:53.878425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:54.891 [2024-11-20 13:38:53.893460] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:18:54.891 13:38:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.891 13:38:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:54.891 [2024-11-20 13:38:53.895723] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:55.460 13:38:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:55.460 13:38:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:55.460 13:38:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:55.460 13:38:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:55.460 13:38:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:55.460 13:38:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.460 13:38:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:18:55.460 13:38:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.460 13:38:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.460 13:38:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.720 13:38:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:55.720 "name": "raid_bdev1", 00:18:55.720 "uuid": "ec90cc6a-853e-42f5-942f-9a63be2293b6", 00:18:55.720 "strip_size_kb": 0, 00:18:55.720 "state": "online", 00:18:55.720 "raid_level": "raid1", 00:18:55.720 "superblock": false, 00:18:55.720 "num_base_bdevs": 4, 00:18:55.720 "num_base_bdevs_discovered": 4, 00:18:55.720 "num_base_bdevs_operational": 4, 00:18:55.720 "process": { 00:18:55.720 "type": "rebuild", 00:18:55.720 "target": "spare", 00:18:55.720 "progress": { 00:18:55.720 "blocks": 20480, 00:18:55.720 "percent": 31 00:18:55.720 } 00:18:55.720 }, 00:18:55.720 "base_bdevs_list": [ 00:18:55.720 { 00:18:55.720 "name": "spare", 00:18:55.720 "uuid": "09b95732-044f-597c-983b-ccb91d6124e5", 00:18:55.720 "is_configured": true, 00:18:55.720 "data_offset": 0, 00:18:55.720 "data_size": 65536 00:18:55.720 }, 00:18:55.720 { 00:18:55.720 "name": "BaseBdev2", 00:18:55.720 "uuid": "21e61b04-af2f-5d63-82f2-f96c2f7603ab", 00:18:55.720 "is_configured": true, 00:18:55.720 "data_offset": 0, 00:18:55.720 "data_size": 65536 00:18:55.720 }, 00:18:55.720 { 00:18:55.720 "name": "BaseBdev3", 00:18:55.720 "uuid": "ebc6da13-0693-53ee-9df5-25c9e8338c57", 00:18:55.720 "is_configured": true, 00:18:55.720 "data_offset": 0, 00:18:55.720 "data_size": 65536 00:18:55.720 }, 00:18:55.720 { 00:18:55.720 "name": "BaseBdev4", 00:18:55.720 "uuid": "da7feaaf-0e89-5d2e-b526-1c30cbae9e3e", 00:18:55.720 "is_configured": true, 00:18:55.720 "data_offset": 0, 00:18:55.720 "data_size": 65536 00:18:55.720 } 00:18:55.720 ] 00:18:55.720 }' 00:18:55.720 13:38:54 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:55.720 13:38:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:55.720 13:38:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:55.720 13:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:55.720 13:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:55.720 13:38:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.720 13:38:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.720 [2024-11-20 13:38:55.046736] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:55.720 [2024-11-20 13:38:55.101584] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:55.720 [2024-11-20 13:38:55.101665] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:55.720 [2024-11-20 13:38:55.101687] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:55.720 [2024-11-20 13:38:55.101700] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:55.720 13:38:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.720 13:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:55.720 13:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:55.720 13:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:55.720 13:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:55.720 13:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:18:55.720 13:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:55.720 13:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.720 13:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.720 13:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.720 13:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.720 13:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.720 13:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.720 13:38:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.720 13:38:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.720 13:38:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.720 13:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.720 "name": "raid_bdev1", 00:18:55.720 "uuid": "ec90cc6a-853e-42f5-942f-9a63be2293b6", 00:18:55.720 "strip_size_kb": 0, 00:18:55.720 "state": "online", 00:18:55.720 "raid_level": "raid1", 00:18:55.720 "superblock": false, 00:18:55.720 "num_base_bdevs": 4, 00:18:55.720 "num_base_bdevs_discovered": 3, 00:18:55.720 "num_base_bdevs_operational": 3, 00:18:55.720 "base_bdevs_list": [ 00:18:55.720 { 00:18:55.720 "name": null, 00:18:55.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.720 "is_configured": false, 00:18:55.720 "data_offset": 0, 00:18:55.720 "data_size": 65536 00:18:55.720 }, 00:18:55.720 { 00:18:55.720 "name": "BaseBdev2", 00:18:55.720 "uuid": "21e61b04-af2f-5d63-82f2-f96c2f7603ab", 00:18:55.720 "is_configured": true, 00:18:55.720 "data_offset": 0, 00:18:55.720 "data_size": 65536 00:18:55.720 }, 00:18:55.720 { 
00:18:55.720 "name": "BaseBdev3", 00:18:55.720 "uuid": "ebc6da13-0693-53ee-9df5-25c9e8338c57", 00:18:55.720 "is_configured": true, 00:18:55.720 "data_offset": 0, 00:18:55.720 "data_size": 65536 00:18:55.720 }, 00:18:55.720 { 00:18:55.720 "name": "BaseBdev4", 00:18:55.720 "uuid": "da7feaaf-0e89-5d2e-b526-1c30cbae9e3e", 00:18:55.720 "is_configured": true, 00:18:55.720 "data_offset": 0, 00:18:55.720 "data_size": 65536 00:18:55.720 } 00:18:55.720 ] 00:18:55.720 }' 00:18:55.720 13:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.720 13:38:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.288 13:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:56.288 13:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:56.288 13:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:56.288 13:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:56.288 13:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:56.288 13:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.288 13:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.288 13:38:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.288 13:38:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.288 13:38:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.288 13:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:56.288 "name": "raid_bdev1", 00:18:56.288 "uuid": "ec90cc6a-853e-42f5-942f-9a63be2293b6", 00:18:56.288 "strip_size_kb": 0, 00:18:56.288 "state": "online", 
00:18:56.288 "raid_level": "raid1", 00:18:56.288 "superblock": false, 00:18:56.288 "num_base_bdevs": 4, 00:18:56.288 "num_base_bdevs_discovered": 3, 00:18:56.288 "num_base_bdevs_operational": 3, 00:18:56.288 "base_bdevs_list": [ 00:18:56.288 { 00:18:56.288 "name": null, 00:18:56.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.288 "is_configured": false, 00:18:56.288 "data_offset": 0, 00:18:56.288 "data_size": 65536 00:18:56.288 }, 00:18:56.288 { 00:18:56.288 "name": "BaseBdev2", 00:18:56.288 "uuid": "21e61b04-af2f-5d63-82f2-f96c2f7603ab", 00:18:56.288 "is_configured": true, 00:18:56.288 "data_offset": 0, 00:18:56.288 "data_size": 65536 00:18:56.288 }, 00:18:56.288 { 00:18:56.288 "name": "BaseBdev3", 00:18:56.288 "uuid": "ebc6da13-0693-53ee-9df5-25c9e8338c57", 00:18:56.288 "is_configured": true, 00:18:56.288 "data_offset": 0, 00:18:56.288 "data_size": 65536 00:18:56.288 }, 00:18:56.288 { 00:18:56.288 "name": "BaseBdev4", 00:18:56.288 "uuid": "da7feaaf-0e89-5d2e-b526-1c30cbae9e3e", 00:18:56.288 "is_configured": true, 00:18:56.288 "data_offset": 0, 00:18:56.288 "data_size": 65536 00:18:56.288 } 00:18:56.288 ] 00:18:56.288 }' 00:18:56.288 13:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:56.288 13:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:56.288 13:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:56.288 13:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:56.288 13:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:56.288 13:38:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.288 13:38:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.288 [2024-11-20 13:38:55.688706] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:56.288 [2024-11-20 13:38:55.703134] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:18:56.288 13:38:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.288 13:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:56.288 [2024-11-20 13:38:55.705319] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:57.314 13:38:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:57.314 13:38:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:57.314 13:38:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:57.314 13:38:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:57.314 13:38:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:57.314 13:38:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.314 13:38:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.314 13:38:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.314 13:38:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.314 13:38:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.314 13:38:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:57.314 "name": "raid_bdev1", 00:18:57.314 "uuid": "ec90cc6a-853e-42f5-942f-9a63be2293b6", 00:18:57.314 "strip_size_kb": 0, 00:18:57.314 "state": "online", 00:18:57.314 "raid_level": "raid1", 00:18:57.314 "superblock": false, 00:18:57.314 "num_base_bdevs": 4, 00:18:57.314 
"num_base_bdevs_discovered": 4, 00:18:57.314 "num_base_bdevs_operational": 4, 00:18:57.314 "process": { 00:18:57.314 "type": "rebuild", 00:18:57.314 "target": "spare", 00:18:57.314 "progress": { 00:18:57.314 "blocks": 20480, 00:18:57.314 "percent": 31 00:18:57.315 } 00:18:57.315 }, 00:18:57.315 "base_bdevs_list": [ 00:18:57.315 { 00:18:57.315 "name": "spare", 00:18:57.315 "uuid": "09b95732-044f-597c-983b-ccb91d6124e5", 00:18:57.315 "is_configured": true, 00:18:57.315 "data_offset": 0, 00:18:57.315 "data_size": 65536 00:18:57.315 }, 00:18:57.315 { 00:18:57.315 "name": "BaseBdev2", 00:18:57.315 "uuid": "21e61b04-af2f-5d63-82f2-f96c2f7603ab", 00:18:57.315 "is_configured": true, 00:18:57.315 "data_offset": 0, 00:18:57.315 "data_size": 65536 00:18:57.315 }, 00:18:57.315 { 00:18:57.315 "name": "BaseBdev3", 00:18:57.315 "uuid": "ebc6da13-0693-53ee-9df5-25c9e8338c57", 00:18:57.315 "is_configured": true, 00:18:57.315 "data_offset": 0, 00:18:57.315 "data_size": 65536 00:18:57.315 }, 00:18:57.315 { 00:18:57.315 "name": "BaseBdev4", 00:18:57.315 "uuid": "da7feaaf-0e89-5d2e-b526-1c30cbae9e3e", 00:18:57.315 "is_configured": true, 00:18:57.315 "data_offset": 0, 00:18:57.315 "data_size": 65536 00:18:57.315 } 00:18:57.315 ] 00:18:57.315 }' 00:18:57.315 13:38:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:57.315 13:38:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:57.315 13:38:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:57.572 13:38:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:57.572 13:38:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:18:57.572 13:38:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:57.572 13:38:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = 
raid1 ']' 00:18:57.572 13:38:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:18:57.572 13:38:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:57.572 13:38:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.572 13:38:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.572 [2024-11-20 13:38:56.849168] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:57.572 [2024-11-20 13:38:56.911208] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:18:57.572 13:38:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.573 13:38:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:18:57.573 13:38:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:18:57.573 13:38:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:57.573 13:38:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:57.573 13:38:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:57.573 13:38:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:57.573 13:38:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:57.573 13:38:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.573 13:38:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.573 13:38:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.573 13:38:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.573 13:38:56 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.573 13:38:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:57.573 "name": "raid_bdev1", 00:18:57.573 "uuid": "ec90cc6a-853e-42f5-942f-9a63be2293b6", 00:18:57.573 "strip_size_kb": 0, 00:18:57.573 "state": "online", 00:18:57.573 "raid_level": "raid1", 00:18:57.573 "superblock": false, 00:18:57.573 "num_base_bdevs": 4, 00:18:57.573 "num_base_bdevs_discovered": 3, 00:18:57.573 "num_base_bdevs_operational": 3, 00:18:57.573 "process": { 00:18:57.573 "type": "rebuild", 00:18:57.573 "target": "spare", 00:18:57.573 "progress": { 00:18:57.573 "blocks": 24576, 00:18:57.573 "percent": 37 00:18:57.573 } 00:18:57.573 }, 00:18:57.573 "base_bdevs_list": [ 00:18:57.573 { 00:18:57.573 "name": "spare", 00:18:57.573 "uuid": "09b95732-044f-597c-983b-ccb91d6124e5", 00:18:57.573 "is_configured": true, 00:18:57.573 "data_offset": 0, 00:18:57.573 "data_size": 65536 00:18:57.573 }, 00:18:57.573 { 00:18:57.573 "name": null, 00:18:57.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.573 "is_configured": false, 00:18:57.573 "data_offset": 0, 00:18:57.573 "data_size": 65536 00:18:57.573 }, 00:18:57.573 { 00:18:57.573 "name": "BaseBdev3", 00:18:57.573 "uuid": "ebc6da13-0693-53ee-9df5-25c9e8338c57", 00:18:57.573 "is_configured": true, 00:18:57.573 "data_offset": 0, 00:18:57.573 "data_size": 65536 00:18:57.573 }, 00:18:57.573 { 00:18:57.573 "name": "BaseBdev4", 00:18:57.573 "uuid": "da7feaaf-0e89-5d2e-b526-1c30cbae9e3e", 00:18:57.573 "is_configured": true, 00:18:57.573 "data_offset": 0, 00:18:57.573 "data_size": 65536 00:18:57.573 } 00:18:57.573 ] 00:18:57.573 }' 00:18:57.573 13:38:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:57.573 13:38:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:57.573 13:38:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:18:57.573 13:38:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:57.573 13:38:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=445 00:18:57.573 13:38:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:57.573 13:38:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:57.573 13:38:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:57.573 13:38:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:57.573 13:38:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:57.573 13:38:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:57.573 13:38:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.573 13:38:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.573 13:38:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.831 13:38:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.831 13:38:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.831 13:38:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:57.831 "name": "raid_bdev1", 00:18:57.831 "uuid": "ec90cc6a-853e-42f5-942f-9a63be2293b6", 00:18:57.831 "strip_size_kb": 0, 00:18:57.831 "state": "online", 00:18:57.831 "raid_level": "raid1", 00:18:57.831 "superblock": false, 00:18:57.831 "num_base_bdevs": 4, 00:18:57.831 "num_base_bdevs_discovered": 3, 00:18:57.831 "num_base_bdevs_operational": 3, 00:18:57.831 "process": { 00:18:57.831 "type": "rebuild", 00:18:57.831 "target": "spare", 00:18:57.831 "progress": { 
00:18:57.831 "blocks": 26624, 00:18:57.831 "percent": 40 00:18:57.831 } 00:18:57.831 }, 00:18:57.831 "base_bdevs_list": [ 00:18:57.831 { 00:18:57.831 "name": "spare", 00:18:57.831 "uuid": "09b95732-044f-597c-983b-ccb91d6124e5", 00:18:57.831 "is_configured": true, 00:18:57.831 "data_offset": 0, 00:18:57.831 "data_size": 65536 00:18:57.831 }, 00:18:57.831 { 00:18:57.831 "name": null, 00:18:57.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.831 "is_configured": false, 00:18:57.831 "data_offset": 0, 00:18:57.831 "data_size": 65536 00:18:57.831 }, 00:18:57.831 { 00:18:57.831 "name": "BaseBdev3", 00:18:57.831 "uuid": "ebc6da13-0693-53ee-9df5-25c9e8338c57", 00:18:57.831 "is_configured": true, 00:18:57.831 "data_offset": 0, 00:18:57.831 "data_size": 65536 00:18:57.831 }, 00:18:57.831 { 00:18:57.831 "name": "BaseBdev4", 00:18:57.831 "uuid": "da7feaaf-0e89-5d2e-b526-1c30cbae9e3e", 00:18:57.831 "is_configured": true, 00:18:57.831 "data_offset": 0, 00:18:57.831 "data_size": 65536 00:18:57.831 } 00:18:57.831 ] 00:18:57.831 }' 00:18:57.831 13:38:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:57.831 13:38:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:57.831 13:38:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:57.831 13:38:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:57.831 13:38:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:58.767 13:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:58.767 13:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:58.767 13:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:58.767 13:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:18:58.767 13:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:58.767 13:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:58.767 13:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.767 13:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.767 13:38:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.767 13:38:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.767 13:38:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.767 13:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:58.767 "name": "raid_bdev1", 00:18:58.767 "uuid": "ec90cc6a-853e-42f5-942f-9a63be2293b6", 00:18:58.767 "strip_size_kb": 0, 00:18:58.767 "state": "online", 00:18:58.767 "raid_level": "raid1", 00:18:58.767 "superblock": false, 00:18:58.767 "num_base_bdevs": 4, 00:18:58.767 "num_base_bdevs_discovered": 3, 00:18:58.767 "num_base_bdevs_operational": 3, 00:18:58.767 "process": { 00:18:58.767 "type": "rebuild", 00:18:58.767 "target": "spare", 00:18:58.767 "progress": { 00:18:58.767 "blocks": 49152, 00:18:58.767 "percent": 75 00:18:58.767 } 00:18:58.767 }, 00:18:58.767 "base_bdevs_list": [ 00:18:58.767 { 00:18:58.767 "name": "spare", 00:18:58.767 "uuid": "09b95732-044f-597c-983b-ccb91d6124e5", 00:18:58.767 "is_configured": true, 00:18:58.767 "data_offset": 0, 00:18:58.767 "data_size": 65536 00:18:58.767 }, 00:18:58.767 { 00:18:58.767 "name": null, 00:18:58.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.767 "is_configured": false, 00:18:58.767 "data_offset": 0, 00:18:58.767 "data_size": 65536 00:18:58.767 }, 00:18:58.767 { 00:18:58.767 "name": "BaseBdev3", 00:18:58.767 "uuid": 
"ebc6da13-0693-53ee-9df5-25c9e8338c57", 00:18:58.767 "is_configured": true, 00:18:58.767 "data_offset": 0, 00:18:58.767 "data_size": 65536 00:18:58.767 }, 00:18:58.767 { 00:18:58.767 "name": "BaseBdev4", 00:18:58.767 "uuid": "da7feaaf-0e89-5d2e-b526-1c30cbae9e3e", 00:18:58.767 "is_configured": true, 00:18:58.767 "data_offset": 0, 00:18:58.767 "data_size": 65536 00:18:58.767 } 00:18:58.767 ] 00:18:58.767 }' 00:18:58.767 13:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:59.026 13:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:59.026 13:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:59.026 13:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:59.026 13:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:59.593 [2024-11-20 13:38:58.920491] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:59.593 [2024-11-20 13:38:58.920595] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:59.593 [2024-11-20 13:38:58.920648] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:59.876 13:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:59.876 13:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:59.876 13:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:59.876 13:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:59.876 13:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:59.876 13:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:59.876 13:38:59 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.876 13:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.876 13:38:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.876 13:38:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.876 13:38:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.134 13:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:00.134 "name": "raid_bdev1", 00:19:00.134 "uuid": "ec90cc6a-853e-42f5-942f-9a63be2293b6", 00:19:00.134 "strip_size_kb": 0, 00:19:00.134 "state": "online", 00:19:00.134 "raid_level": "raid1", 00:19:00.134 "superblock": false, 00:19:00.134 "num_base_bdevs": 4, 00:19:00.134 "num_base_bdevs_discovered": 3, 00:19:00.134 "num_base_bdevs_operational": 3, 00:19:00.134 "base_bdevs_list": [ 00:19:00.134 { 00:19:00.134 "name": "spare", 00:19:00.134 "uuid": "09b95732-044f-597c-983b-ccb91d6124e5", 00:19:00.134 "is_configured": true, 00:19:00.134 "data_offset": 0, 00:19:00.134 "data_size": 65536 00:19:00.134 }, 00:19:00.134 { 00:19:00.134 "name": null, 00:19:00.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.134 "is_configured": false, 00:19:00.134 "data_offset": 0, 00:19:00.134 "data_size": 65536 00:19:00.134 }, 00:19:00.134 { 00:19:00.134 "name": "BaseBdev3", 00:19:00.134 "uuid": "ebc6da13-0693-53ee-9df5-25c9e8338c57", 00:19:00.134 "is_configured": true, 00:19:00.134 "data_offset": 0, 00:19:00.134 "data_size": 65536 00:19:00.134 }, 00:19:00.134 { 00:19:00.134 "name": "BaseBdev4", 00:19:00.134 "uuid": "da7feaaf-0e89-5d2e-b526-1c30cbae9e3e", 00:19:00.134 "is_configured": true, 00:19:00.134 "data_offset": 0, 00:19:00.134 "data_size": 65536 00:19:00.134 } 00:19:00.134 ] 00:19:00.134 }' 00:19:00.134 13:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:19:00.134 13:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:00.134 13:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:00.134 13:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:00.134 13:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:19:00.134 13:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:00.134 13:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:00.134 13:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:00.134 13:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:00.134 13:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:00.134 13:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.134 13:38:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.134 13:38:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.134 13:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.134 13:38:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.134 13:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:00.134 "name": "raid_bdev1", 00:19:00.134 "uuid": "ec90cc6a-853e-42f5-942f-9a63be2293b6", 00:19:00.134 "strip_size_kb": 0, 00:19:00.134 "state": "online", 00:19:00.134 "raid_level": "raid1", 00:19:00.134 "superblock": false, 00:19:00.134 "num_base_bdevs": 4, 00:19:00.134 "num_base_bdevs_discovered": 3, 00:19:00.134 "num_base_bdevs_operational": 3, 00:19:00.134 
"base_bdevs_list": [ 00:19:00.134 { 00:19:00.134 "name": "spare", 00:19:00.134 "uuid": "09b95732-044f-597c-983b-ccb91d6124e5", 00:19:00.134 "is_configured": true, 00:19:00.134 "data_offset": 0, 00:19:00.134 "data_size": 65536 00:19:00.134 }, 00:19:00.134 { 00:19:00.134 "name": null, 00:19:00.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.134 "is_configured": false, 00:19:00.134 "data_offset": 0, 00:19:00.134 "data_size": 65536 00:19:00.134 }, 00:19:00.134 { 00:19:00.134 "name": "BaseBdev3", 00:19:00.134 "uuid": "ebc6da13-0693-53ee-9df5-25c9e8338c57", 00:19:00.134 "is_configured": true, 00:19:00.134 "data_offset": 0, 00:19:00.134 "data_size": 65536 00:19:00.134 }, 00:19:00.134 { 00:19:00.134 "name": "BaseBdev4", 00:19:00.134 "uuid": "da7feaaf-0e89-5d2e-b526-1c30cbae9e3e", 00:19:00.134 "is_configured": true, 00:19:00.134 "data_offset": 0, 00:19:00.134 "data_size": 65536 00:19:00.134 } 00:19:00.134 ] 00:19:00.134 }' 00:19:00.134 13:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:00.134 13:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:00.134 13:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:00.134 13:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:00.134 13:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:00.134 13:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:00.134 13:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:00.134 13:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:00.134 13:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:00.134 13:38:59 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:00.134 13:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.134 13:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.134 13:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.134 13:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.134 13:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.134 13:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.134 13:38:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.134 13:38:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.392 13:38:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.392 13:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.392 "name": "raid_bdev1", 00:19:00.392 "uuid": "ec90cc6a-853e-42f5-942f-9a63be2293b6", 00:19:00.392 "strip_size_kb": 0, 00:19:00.392 "state": "online", 00:19:00.392 "raid_level": "raid1", 00:19:00.392 "superblock": false, 00:19:00.392 "num_base_bdevs": 4, 00:19:00.392 "num_base_bdevs_discovered": 3, 00:19:00.392 "num_base_bdevs_operational": 3, 00:19:00.392 "base_bdevs_list": [ 00:19:00.392 { 00:19:00.392 "name": "spare", 00:19:00.392 "uuid": "09b95732-044f-597c-983b-ccb91d6124e5", 00:19:00.392 "is_configured": true, 00:19:00.392 "data_offset": 0, 00:19:00.392 "data_size": 65536 00:19:00.392 }, 00:19:00.392 { 00:19:00.392 "name": null, 00:19:00.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.392 "is_configured": false, 00:19:00.392 "data_offset": 0, 00:19:00.392 "data_size": 65536 00:19:00.392 }, 00:19:00.392 { 00:19:00.392 "name": "BaseBdev3", 00:19:00.392 "uuid": 
"ebc6da13-0693-53ee-9df5-25c9e8338c57", 00:19:00.392 "is_configured": true, 00:19:00.392 "data_offset": 0, 00:19:00.392 "data_size": 65536 00:19:00.392 }, 00:19:00.392 { 00:19:00.392 "name": "BaseBdev4", 00:19:00.392 "uuid": "da7feaaf-0e89-5d2e-b526-1c30cbae9e3e", 00:19:00.392 "is_configured": true, 00:19:00.392 "data_offset": 0, 00:19:00.392 "data_size": 65536 00:19:00.392 } 00:19:00.392 ] 00:19:00.392 }' 00:19:00.392 13:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.392 13:38:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.650 13:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:00.650 13:38:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.650 13:38:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.650 [2024-11-20 13:38:59.987525] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:00.650 [2024-11-20 13:38:59.987562] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:00.650 [2024-11-20 13:38:59.987653] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:00.650 [2024-11-20 13:38:59.987736] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:00.650 [2024-11-20 13:38:59.987749] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:00.650 13:38:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.650 13:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.650 13:38:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.650 13:38:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:19:00.650 13:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:19:00.650 13:39:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.650 13:39:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:00.650 13:39:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:00.650 13:39:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:00.650 13:39:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:00.650 13:39:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:00.650 13:39:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:00.650 13:39:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:00.650 13:39:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:00.650 13:39:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:00.650 13:39:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:00.650 13:39:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:00.650 13:39:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:00.650 13:39:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:00.939 /dev/nbd0 00:19:00.939 13:39:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:00.939 13:39:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:00.939 13:39:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:00.939 13:39:00 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:19:00.939 13:39:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:00.939 13:39:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:00.939 13:39:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:00.939 13:39:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:19:00.939 13:39:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:00.939 13:39:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:00.939 13:39:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:00.939 1+0 records in 00:19:00.939 1+0 records out 00:19:00.939 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412877 s, 9.9 MB/s 00:19:00.939 13:39:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:00.939 13:39:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:19:00.939 13:39:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:00.939 13:39:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:00.939 13:39:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:19:00.939 13:39:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:00.939 13:39:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:00.939 13:39:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:01.198 /dev/nbd1 00:19:01.198 
13:39:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:01.198 13:39:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:01.198 13:39:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:01.198 13:39:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:19:01.198 13:39:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:01.198 13:39:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:01.198 13:39:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:01.198 13:39:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:19:01.198 13:39:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:01.198 13:39:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:01.198 13:39:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:01.198 1+0 records in 00:19:01.198 1+0 records out 00:19:01.198 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000454226 s, 9.0 MB/s 00:19:01.198 13:39:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:01.198 13:39:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:19:01.198 13:39:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:01.198 13:39:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:01.198 13:39:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:19:01.198 13:39:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:19:01.198 13:39:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:01.198 13:39:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:01.456 13:39:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:01.457 13:39:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:01.457 13:39:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:01.457 13:39:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:01.457 13:39:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:01.457 13:39:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:01.457 13:39:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:01.716 13:39:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:01.716 13:39:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:01.716 13:39:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:01.716 13:39:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:01.716 13:39:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:01.716 13:39:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:01.716 13:39:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:01.716 13:39:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:01.716 13:39:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:01.716 13:39:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:01.976 13:39:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:01.976 13:39:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:01.976 13:39:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:01.976 13:39:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:01.977 13:39:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:01.977 13:39:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:01.977 13:39:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:01.977 13:39:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:01.977 13:39:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:19:01.977 13:39:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77324 00:19:01.977 13:39:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77324 ']' 00:19:01.977 13:39:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77324 00:19:01.977 13:39:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:19:01.977 13:39:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:01.977 13:39:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77324 00:19:01.977 13:39:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:01.977 killing process with pid 77324 00:19:01.977 Received shutdown signal, test time was about 60.000000 seconds 00:19:01.977 00:19:01.977 Latency(us) 00:19:01.977 [2024-11-20T13:39:01.462Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.977 
[2024-11-20T13:39:01.462Z] =================================================================================================================== 00:19:01.977 [2024-11-20T13:39:01.462Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:01.977 13:39:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:01.977 13:39:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77324' 00:19:01.977 13:39:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77324 00:19:01.977 [2024-11-20 13:39:01.388347] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:01.977 13:39:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77324 00:19:02.544 [2024-11-20 13:39:01.926222] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:19:03.919 00:19:03.919 real 0m19.204s 00:19:03.919 user 0m20.852s 00:19:03.919 sys 0m4.203s 00:19:03.919 ************************************ 00:19:03.919 END TEST raid_rebuild_test 00:19:03.919 ************************************ 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.919 13:39:03 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:19:03.919 13:39:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:03.919 13:39:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:03.919 13:39:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:03.919 ************************************ 00:19:03.919 START TEST raid_rebuild_test_sb 00:19:03.919 ************************************ 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=77787 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 77787 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 77787 ']' 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:03.919 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:03.919 13:39:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.919 [2024-11-20 13:39:03.336141] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:19:03.919 [2024-11-20 13:39:03.336463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:19:03.919 Zero copy mechanism will not be used. 00:19:03.919 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77787 ] 00:19:04.179 [2024-11-20 13:39:03.506966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.179 [2024-11-20 13:39:03.632851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.440 [2024-11-20 13:39:03.850153] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:04.440 [2024-11-20 13:39:03.850431] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:05.008 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:05.008 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:19:05.008 13:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:05.008 13:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:05.008 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:05.008 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.008 BaseBdev1_malloc 00:19:05.008 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.008 13:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:05.008 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.008 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.008 [2024-11-20 13:39:04.284905] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:05.008 [2024-11-20 13:39:04.284980] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.008 [2024-11-20 13:39:04.285006] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:05.008 [2024-11-20 13:39:04.285024] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.008 [2024-11-20 13:39:04.287564] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.008 [2024-11-20 13:39:04.287615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:05.008 BaseBdev1 00:19:05.008 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.008 13:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:05.008 13:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:05.008 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.008 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.008 BaseBdev2_malloc 00:19:05.008 13:39:04 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.008 13:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:05.008 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.008 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.008 [2024-11-20 13:39:04.338746] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:05.008 [2024-11-20 13:39:04.338984] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.008 [2024-11-20 13:39:04.339024] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:05.008 [2024-11-20 13:39:04.339043] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.008 [2024-11-20 13:39:04.341717] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.008 [2024-11-20 13:39:04.341768] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:05.008 BaseBdev2 00:19:05.008 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.008 13:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:05.008 13:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:05.008 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.008 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.008 BaseBdev3_malloc 00:19:05.008 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.008 13:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 
00:19:05.008 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.008 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.008 [2024-11-20 13:39:04.402946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:05.008 [2024-11-20 13:39:04.403018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.008 [2024-11-20 13:39:04.403046] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:05.008 [2024-11-20 13:39:04.403075] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.008 [2024-11-20 13:39:04.405594] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.009 [2024-11-20 13:39:04.405768] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:05.009 BaseBdev3 00:19:05.009 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.009 13:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:05.009 13:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:05.009 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.009 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.009 BaseBdev4_malloc 00:19:05.009 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.009 13:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:19:05.009 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.009 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:19:05.009 [2024-11-20 13:39:04.461510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:19:05.009 [2024-11-20 13:39:04.461602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.009 [2024-11-20 13:39:04.461628] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:05.009 [2024-11-20 13:39:04.461645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.009 [2024-11-20 13:39:04.464273] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.009 [2024-11-20 13:39:04.464338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:05.009 BaseBdev4 00:19:05.009 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.009 13:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:05.009 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.009 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.267 spare_malloc 00:19:05.267 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.267 13:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:05.267 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.267 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.267 spare_delay 00:19:05.268 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.268 13:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:05.268 13:39:04 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.268 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.268 [2024-11-20 13:39:04.529328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:05.268 [2024-11-20 13:39:04.529397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.268 [2024-11-20 13:39:04.529422] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:05.268 [2024-11-20 13:39:04.529439] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.268 [2024-11-20 13:39:04.532043] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.268 [2024-11-20 13:39:04.532255] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:05.268 spare 00:19:05.268 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.268 13:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:19:05.268 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.268 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.268 [2024-11-20 13:39:04.541365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:05.268 [2024-11-20 13:39:04.543638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:05.268 [2024-11-20 13:39:04.543852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:05.268 [2024-11-20 13:39:04.543931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:05.268 [2024-11-20 13:39:04.544155] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:05.268 [2024-11-20 13:39:04.544174] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:05.268 [2024-11-20 13:39:04.544489] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:05.268 [2024-11-20 13:39:04.544718] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:05.268 [2024-11-20 13:39:04.544731] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:05.268 [2024-11-20 13:39:04.544924] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:05.268 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.268 13:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:05.268 13:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:05.268 13:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:05.268 13:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:05.268 13:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:05.268 13:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:05.268 13:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.268 13:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.268 13:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.268 13:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.268 13:39:04 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.268 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.268 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.268 13:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.268 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.268 13:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.268 "name": "raid_bdev1", 00:19:05.268 "uuid": "7216608b-2e23-465d-ac44-ba3e9a6480e1", 00:19:05.268 "strip_size_kb": 0, 00:19:05.268 "state": "online", 00:19:05.268 "raid_level": "raid1", 00:19:05.268 "superblock": true, 00:19:05.268 "num_base_bdevs": 4, 00:19:05.268 "num_base_bdevs_discovered": 4, 00:19:05.268 "num_base_bdevs_operational": 4, 00:19:05.268 "base_bdevs_list": [ 00:19:05.268 { 00:19:05.268 "name": "BaseBdev1", 00:19:05.268 "uuid": "f08af0a4-ad4c-5d6d-b6c7-a96251210945", 00:19:05.268 "is_configured": true, 00:19:05.268 "data_offset": 2048, 00:19:05.268 "data_size": 63488 00:19:05.268 }, 00:19:05.268 { 00:19:05.268 "name": "BaseBdev2", 00:19:05.268 "uuid": "fcddc976-313b-5679-bb68-fd0bf93b2653", 00:19:05.268 "is_configured": true, 00:19:05.268 "data_offset": 2048, 00:19:05.268 "data_size": 63488 00:19:05.268 }, 00:19:05.268 { 00:19:05.268 "name": "BaseBdev3", 00:19:05.268 "uuid": "24d07dd7-f78a-5198-ad8d-dcfaba693d28", 00:19:05.268 "is_configured": true, 00:19:05.268 "data_offset": 2048, 00:19:05.268 "data_size": 63488 00:19:05.268 }, 00:19:05.268 { 00:19:05.268 "name": "BaseBdev4", 00:19:05.268 "uuid": "4ad41955-09ee-5e7d-942f-e09a5455b1ce", 00:19:05.268 "is_configured": true, 00:19:05.268 "data_offset": 2048, 00:19:05.268 "data_size": 63488 00:19:05.268 } 00:19:05.268 ] 00:19:05.268 }' 00:19:05.268 13:39:04 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.268 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.527 13:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:05.527 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.527 13:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:05.527 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.527 [2024-11-20 13:39:04.957180] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:05.527 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.527 13:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:19:05.527 13:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.527 13:39:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:05.527 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.527 13:39:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.786 13:39:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.786 13:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:19:05.786 13:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:05.786 13:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:05.786 13:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:05.786 13:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 
00:19:05.786 13:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:05.786 13:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:05.786 13:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:05.786 13:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:05.786 13:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:05.786 13:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:05.786 13:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:05.786 13:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:05.786 13:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:05.786 [2024-11-20 13:39:05.260480] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:06.045 /dev/nbd0 00:19:06.045 13:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:06.045 13:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:06.045 13:39:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:06.045 13:39:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:06.045 13:39:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:06.045 13:39:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:06.045 13:39:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:06.045 13:39:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:06.045 
13:39:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:06.045 13:39:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:06.045 13:39:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:06.045 1+0 records in 00:19:06.045 1+0 records out 00:19:06.045 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343977 s, 11.9 MB/s 00:19:06.045 13:39:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:06.045 13:39:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:06.045 13:39:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:06.045 13:39:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:06.045 13:39:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:06.045 13:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:06.045 13:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:06.045 13:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:19:06.045 13:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:19:06.045 13:39:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:19:12.605 63488+0 records in 00:19:12.605 63488+0 records out 00:19:12.605 32505856 bytes (33 MB, 31 MiB) copied, 6.63636 s, 4.9 MB/s 00:19:12.605 13:39:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:12.605 13:39:11 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:12.605 13:39:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:12.605 13:39:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:12.605 13:39:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:12.605 13:39:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:12.605 13:39:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:12.863 [2024-11-20 13:39:12.190476] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:12.863 13:39:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:12.863 13:39:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:12.863 13:39:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:12.863 13:39:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:12.863 13:39:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:12.863 13:39:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:12.863 13:39:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:12.863 13:39:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:12.863 13:39:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:12.863 13:39:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.863 13:39:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.863 [2024-11-20 13:39:12.222520] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:12.863 
13:39:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.863 13:39:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:12.863 13:39:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:12.863 13:39:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:12.863 13:39:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:12.863 13:39:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:12.863 13:39:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:12.863 13:39:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.863 13:39:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.863 13:39:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.863 13:39:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.863 13:39:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.863 13:39:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.863 13:39:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.863 13:39:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.863 13:39:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.863 13:39:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:12.863 "name": "raid_bdev1", 00:19:12.863 "uuid": "7216608b-2e23-465d-ac44-ba3e9a6480e1", 00:19:12.863 "strip_size_kb": 0, 00:19:12.863 "state": 
"online", 00:19:12.863 "raid_level": "raid1", 00:19:12.863 "superblock": true, 00:19:12.863 "num_base_bdevs": 4, 00:19:12.863 "num_base_bdevs_discovered": 3, 00:19:12.863 "num_base_bdevs_operational": 3, 00:19:12.863 "base_bdevs_list": [ 00:19:12.863 { 00:19:12.863 "name": null, 00:19:12.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.863 "is_configured": false, 00:19:12.863 "data_offset": 0, 00:19:12.863 "data_size": 63488 00:19:12.863 }, 00:19:12.863 { 00:19:12.863 "name": "BaseBdev2", 00:19:12.863 "uuid": "fcddc976-313b-5679-bb68-fd0bf93b2653", 00:19:12.863 "is_configured": true, 00:19:12.863 "data_offset": 2048, 00:19:12.863 "data_size": 63488 00:19:12.863 }, 00:19:12.863 { 00:19:12.863 "name": "BaseBdev3", 00:19:12.863 "uuid": "24d07dd7-f78a-5198-ad8d-dcfaba693d28", 00:19:12.863 "is_configured": true, 00:19:12.863 "data_offset": 2048, 00:19:12.863 "data_size": 63488 00:19:12.863 }, 00:19:12.863 { 00:19:12.863 "name": "BaseBdev4", 00:19:12.863 "uuid": "4ad41955-09ee-5e7d-942f-e09a5455b1ce", 00:19:12.863 "is_configured": true, 00:19:12.863 "data_offset": 2048, 00:19:12.863 "data_size": 63488 00:19:12.863 } 00:19:12.863 ] 00:19:12.863 }' 00:19:12.863 13:39:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:12.863 13:39:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.121 13:39:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:13.121 13:39:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.121 13:39:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.379 [2024-11-20 13:39:12.606437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:13.379 [2024-11-20 13:39:12.624559] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:19:13.379 13:39:12 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.379 13:39:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:13.379 [2024-11-20 13:39:12.626882] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:14.314 13:39:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:14.314 13:39:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:14.314 13:39:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:14.314 13:39:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:14.314 13:39:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:14.314 13:39:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.314 13:39:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.314 13:39:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.314 13:39:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.314 13:39:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.314 13:39:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:14.314 "name": "raid_bdev1", 00:19:14.314 "uuid": "7216608b-2e23-465d-ac44-ba3e9a6480e1", 00:19:14.314 "strip_size_kb": 0, 00:19:14.314 "state": "online", 00:19:14.314 "raid_level": "raid1", 00:19:14.314 "superblock": true, 00:19:14.314 "num_base_bdevs": 4, 00:19:14.314 "num_base_bdevs_discovered": 4, 00:19:14.314 "num_base_bdevs_operational": 4, 00:19:14.314 "process": { 00:19:14.314 "type": "rebuild", 00:19:14.314 "target": "spare", 00:19:14.314 "progress": { 00:19:14.314 "blocks": 20480, 
00:19:14.314 "percent": 32 00:19:14.314 } 00:19:14.314 }, 00:19:14.314 "base_bdevs_list": [ 00:19:14.314 { 00:19:14.314 "name": "spare", 00:19:14.314 "uuid": "023b668c-990c-5ddf-8a1d-e355fdcbb404", 00:19:14.314 "is_configured": true, 00:19:14.314 "data_offset": 2048, 00:19:14.314 "data_size": 63488 00:19:14.314 }, 00:19:14.314 { 00:19:14.314 "name": "BaseBdev2", 00:19:14.314 "uuid": "fcddc976-313b-5679-bb68-fd0bf93b2653", 00:19:14.314 "is_configured": true, 00:19:14.314 "data_offset": 2048, 00:19:14.314 "data_size": 63488 00:19:14.314 }, 00:19:14.314 { 00:19:14.314 "name": "BaseBdev3", 00:19:14.314 "uuid": "24d07dd7-f78a-5198-ad8d-dcfaba693d28", 00:19:14.314 "is_configured": true, 00:19:14.314 "data_offset": 2048, 00:19:14.314 "data_size": 63488 00:19:14.314 }, 00:19:14.314 { 00:19:14.314 "name": "BaseBdev4", 00:19:14.314 "uuid": "4ad41955-09ee-5e7d-942f-e09a5455b1ce", 00:19:14.314 "is_configured": true, 00:19:14.314 "data_offset": 2048, 00:19:14.314 "data_size": 63488 00:19:14.314 } 00:19:14.314 ] 00:19:14.314 }' 00:19:14.314 13:39:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:14.314 13:39:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:14.314 13:39:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:14.314 13:39:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:14.314 13:39:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:14.314 13:39:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.314 13:39:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.314 [2024-11-20 13:39:13.750424] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:14.573 [2024-11-20 13:39:13.832175] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:14.573 [2024-11-20 13:39:13.832434] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:14.573 [2024-11-20 13:39:13.832630] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:14.573 [2024-11-20 13:39:13.832679] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:14.573 13:39:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.573 13:39:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:14.573 13:39:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:14.573 13:39:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:14.573 13:39:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:14.573 13:39:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:14.573 13:39:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:14.573 13:39:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:14.573 13:39:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:14.573 13:39:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:14.573 13:39:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:14.573 13:39:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.573 13:39:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.573 13:39:13 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.573 13:39:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.573 13:39:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.573 13:39:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:14.573 "name": "raid_bdev1", 00:19:14.573 "uuid": "7216608b-2e23-465d-ac44-ba3e9a6480e1", 00:19:14.573 "strip_size_kb": 0, 00:19:14.573 "state": "online", 00:19:14.573 "raid_level": "raid1", 00:19:14.573 "superblock": true, 00:19:14.573 "num_base_bdevs": 4, 00:19:14.573 "num_base_bdevs_discovered": 3, 00:19:14.573 "num_base_bdevs_operational": 3, 00:19:14.573 "base_bdevs_list": [ 00:19:14.573 { 00:19:14.573 "name": null, 00:19:14.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.573 "is_configured": false, 00:19:14.573 "data_offset": 0, 00:19:14.573 "data_size": 63488 00:19:14.573 }, 00:19:14.573 { 00:19:14.573 "name": "BaseBdev2", 00:19:14.573 "uuid": "fcddc976-313b-5679-bb68-fd0bf93b2653", 00:19:14.573 "is_configured": true, 00:19:14.573 "data_offset": 2048, 00:19:14.573 "data_size": 63488 00:19:14.573 }, 00:19:14.573 { 00:19:14.573 "name": "BaseBdev3", 00:19:14.573 "uuid": "24d07dd7-f78a-5198-ad8d-dcfaba693d28", 00:19:14.573 "is_configured": true, 00:19:14.573 "data_offset": 2048, 00:19:14.573 "data_size": 63488 00:19:14.573 }, 00:19:14.573 { 00:19:14.573 "name": "BaseBdev4", 00:19:14.573 "uuid": "4ad41955-09ee-5e7d-942f-e09a5455b1ce", 00:19:14.573 "is_configured": true, 00:19:14.573 "data_offset": 2048, 00:19:14.573 "data_size": 63488 00:19:14.573 } 00:19:14.573 ] 00:19:14.573 }' 00:19:14.573 13:39:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:14.573 13:39:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.832 13:39:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 
00:19:14.832 13:39:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:14.832 13:39:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:14.832 13:39:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:14.832 13:39:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:14.832 13:39:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.832 13:39:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.832 13:39:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.832 13:39:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.832 13:39:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.832 13:39:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:14.832 "name": "raid_bdev1", 00:19:14.832 "uuid": "7216608b-2e23-465d-ac44-ba3e9a6480e1", 00:19:14.832 "strip_size_kb": 0, 00:19:14.832 "state": "online", 00:19:14.832 "raid_level": "raid1", 00:19:14.832 "superblock": true, 00:19:14.832 "num_base_bdevs": 4, 00:19:14.832 "num_base_bdevs_discovered": 3, 00:19:14.832 "num_base_bdevs_operational": 3, 00:19:14.832 "base_bdevs_list": [ 00:19:14.832 { 00:19:14.832 "name": null, 00:19:14.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.832 "is_configured": false, 00:19:14.832 "data_offset": 0, 00:19:14.832 "data_size": 63488 00:19:14.832 }, 00:19:14.832 { 00:19:14.832 "name": "BaseBdev2", 00:19:14.832 "uuid": "fcddc976-313b-5679-bb68-fd0bf93b2653", 00:19:14.832 "is_configured": true, 00:19:14.832 "data_offset": 2048, 00:19:14.832 "data_size": 63488 00:19:14.832 }, 00:19:14.832 { 00:19:14.832 "name": "BaseBdev3", 00:19:14.832 "uuid": 
"24d07dd7-f78a-5198-ad8d-dcfaba693d28", 00:19:14.832 "is_configured": true, 00:19:14.832 "data_offset": 2048, 00:19:14.832 "data_size": 63488 00:19:14.832 }, 00:19:14.832 { 00:19:14.832 "name": "BaseBdev4", 00:19:14.832 "uuid": "4ad41955-09ee-5e7d-942f-e09a5455b1ce", 00:19:14.832 "is_configured": true, 00:19:14.832 "data_offset": 2048, 00:19:14.832 "data_size": 63488 00:19:14.832 } 00:19:14.832 ] 00:19:14.832 }' 00:19:14.832 13:39:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:15.116 13:39:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:15.116 13:39:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:15.116 13:39:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:15.116 13:39:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:15.116 13:39:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.116 13:39:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.116 [2024-11-20 13:39:14.388408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:15.116 [2024-11-20 13:39:14.404032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:19:15.116 13:39:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.116 13:39:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:15.116 [2024-11-20 13:39:14.406366] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:16.050 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:16.050 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:19:16.050 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:16.050 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:16.050 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:16.050 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.051 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.051 13:39:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.051 13:39:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.051 13:39:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.051 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:16.051 "name": "raid_bdev1", 00:19:16.051 "uuid": "7216608b-2e23-465d-ac44-ba3e9a6480e1", 00:19:16.051 "strip_size_kb": 0, 00:19:16.051 "state": "online", 00:19:16.051 "raid_level": "raid1", 00:19:16.051 "superblock": true, 00:19:16.051 "num_base_bdevs": 4, 00:19:16.051 "num_base_bdevs_discovered": 4, 00:19:16.051 "num_base_bdevs_operational": 4, 00:19:16.051 "process": { 00:19:16.051 "type": "rebuild", 00:19:16.051 "target": "spare", 00:19:16.051 "progress": { 00:19:16.051 "blocks": 20480, 00:19:16.051 "percent": 32 00:19:16.051 } 00:19:16.051 }, 00:19:16.051 "base_bdevs_list": [ 00:19:16.051 { 00:19:16.051 "name": "spare", 00:19:16.051 "uuid": "023b668c-990c-5ddf-8a1d-e355fdcbb404", 00:19:16.051 "is_configured": true, 00:19:16.051 "data_offset": 2048, 00:19:16.051 "data_size": 63488 00:19:16.051 }, 00:19:16.051 { 00:19:16.051 "name": "BaseBdev2", 00:19:16.051 "uuid": "fcddc976-313b-5679-bb68-fd0bf93b2653", 00:19:16.051 "is_configured": true, 00:19:16.051 "data_offset": 2048, 
00:19:16.051 "data_size": 63488 00:19:16.051 }, 00:19:16.051 { 00:19:16.051 "name": "BaseBdev3", 00:19:16.051 "uuid": "24d07dd7-f78a-5198-ad8d-dcfaba693d28", 00:19:16.051 "is_configured": true, 00:19:16.051 "data_offset": 2048, 00:19:16.051 "data_size": 63488 00:19:16.051 }, 00:19:16.051 { 00:19:16.051 "name": "BaseBdev4", 00:19:16.051 "uuid": "4ad41955-09ee-5e7d-942f-e09a5455b1ce", 00:19:16.051 "is_configured": true, 00:19:16.051 "data_offset": 2048, 00:19:16.051 "data_size": 63488 00:19:16.051 } 00:19:16.051 ] 00:19:16.051 }' 00:19:16.051 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:16.051 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:16.051 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:16.310 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:16.310 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:16.310 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:16.310 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:16.310 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:19:16.310 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:16.310 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:19:16.310 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:16.310 13:39:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.310 13:39:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.310 [2024-11-20 13:39:15.558311] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:16.310 [2024-11-20 13:39:15.711688] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:19:16.310 13:39:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.310 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:19:16.310 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:19:16.310 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:16.310 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:16.310 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:16.310 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:16.310 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:16.310 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.310 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.310 13:39:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.310 13:39:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.310 13:39:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.310 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:16.310 "name": "raid_bdev1", 00:19:16.310 "uuid": "7216608b-2e23-465d-ac44-ba3e9a6480e1", 00:19:16.310 "strip_size_kb": 0, 00:19:16.310 "state": "online", 00:19:16.310 "raid_level": "raid1", 00:19:16.310 "superblock": true, 00:19:16.310 "num_base_bdevs": 4, 
00:19:16.310 "num_base_bdevs_discovered": 3, 00:19:16.310 "num_base_bdevs_operational": 3, 00:19:16.310 "process": { 00:19:16.310 "type": "rebuild", 00:19:16.310 "target": "spare", 00:19:16.310 "progress": { 00:19:16.310 "blocks": 24576, 00:19:16.310 "percent": 38 00:19:16.310 } 00:19:16.310 }, 00:19:16.310 "base_bdevs_list": [ 00:19:16.310 { 00:19:16.310 "name": "spare", 00:19:16.310 "uuid": "023b668c-990c-5ddf-8a1d-e355fdcbb404", 00:19:16.310 "is_configured": true, 00:19:16.310 "data_offset": 2048, 00:19:16.310 "data_size": 63488 00:19:16.310 }, 00:19:16.310 { 00:19:16.310 "name": null, 00:19:16.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.310 "is_configured": false, 00:19:16.310 "data_offset": 0, 00:19:16.310 "data_size": 63488 00:19:16.310 }, 00:19:16.310 { 00:19:16.310 "name": "BaseBdev3", 00:19:16.310 "uuid": "24d07dd7-f78a-5198-ad8d-dcfaba693d28", 00:19:16.310 "is_configured": true, 00:19:16.310 "data_offset": 2048, 00:19:16.310 "data_size": 63488 00:19:16.310 }, 00:19:16.310 { 00:19:16.310 "name": "BaseBdev4", 00:19:16.310 "uuid": "4ad41955-09ee-5e7d-942f-e09a5455b1ce", 00:19:16.310 "is_configured": true, 00:19:16.310 "data_offset": 2048, 00:19:16.310 "data_size": 63488 00:19:16.310 } 00:19:16.310 ] 00:19:16.310 }' 00:19:16.310 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:16.569 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:16.569 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:16.570 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:16.570 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=463 00:19:16.570 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:16.570 13:39:15 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:16.570 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:16.570 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:16.570 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:16.570 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:16.570 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.570 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.570 13:39:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.570 13:39:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.570 13:39:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.570 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:16.570 "name": "raid_bdev1", 00:19:16.570 "uuid": "7216608b-2e23-465d-ac44-ba3e9a6480e1", 00:19:16.570 "strip_size_kb": 0, 00:19:16.570 "state": "online", 00:19:16.570 "raid_level": "raid1", 00:19:16.570 "superblock": true, 00:19:16.570 "num_base_bdevs": 4, 00:19:16.570 "num_base_bdevs_discovered": 3, 00:19:16.570 "num_base_bdevs_operational": 3, 00:19:16.570 "process": { 00:19:16.570 "type": "rebuild", 00:19:16.570 "target": "spare", 00:19:16.570 "progress": { 00:19:16.570 "blocks": 26624, 00:19:16.570 "percent": 41 00:19:16.570 } 00:19:16.570 }, 00:19:16.570 "base_bdevs_list": [ 00:19:16.570 { 00:19:16.570 "name": "spare", 00:19:16.570 "uuid": "023b668c-990c-5ddf-8a1d-e355fdcbb404", 00:19:16.570 "is_configured": true, 00:19:16.570 "data_offset": 2048, 00:19:16.570 "data_size": 63488 00:19:16.570 }, 00:19:16.570 { 
00:19:16.570 "name": null, 00:19:16.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.570 "is_configured": false, 00:19:16.570 "data_offset": 0, 00:19:16.570 "data_size": 63488 00:19:16.570 }, 00:19:16.570 { 00:19:16.570 "name": "BaseBdev3", 00:19:16.570 "uuid": "24d07dd7-f78a-5198-ad8d-dcfaba693d28", 00:19:16.570 "is_configured": true, 00:19:16.570 "data_offset": 2048, 00:19:16.570 "data_size": 63488 00:19:16.570 }, 00:19:16.570 { 00:19:16.570 "name": "BaseBdev4", 00:19:16.570 "uuid": "4ad41955-09ee-5e7d-942f-e09a5455b1ce", 00:19:16.570 "is_configured": true, 00:19:16.570 "data_offset": 2048, 00:19:16.570 "data_size": 63488 00:19:16.570 } 00:19:16.570 ] 00:19:16.570 }' 00:19:16.570 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:16.570 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:16.570 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:16.570 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:16.570 13:39:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:17.507 13:39:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:17.507 13:39:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:17.507 13:39:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:17.507 13:39:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:17.507 13:39:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:17.507 13:39:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:17.765 13:39:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:17.765 13:39:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.765 13:39:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.765 13:39:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.766 13:39:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.766 13:39:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:17.766 "name": "raid_bdev1", 00:19:17.766 "uuid": "7216608b-2e23-465d-ac44-ba3e9a6480e1", 00:19:17.766 "strip_size_kb": 0, 00:19:17.766 "state": "online", 00:19:17.766 "raid_level": "raid1", 00:19:17.766 "superblock": true, 00:19:17.766 "num_base_bdevs": 4, 00:19:17.766 "num_base_bdevs_discovered": 3, 00:19:17.766 "num_base_bdevs_operational": 3, 00:19:17.766 "process": { 00:19:17.766 "type": "rebuild", 00:19:17.766 "target": "spare", 00:19:17.766 "progress": { 00:19:17.766 "blocks": 49152, 00:19:17.766 "percent": 77 00:19:17.766 } 00:19:17.766 }, 00:19:17.766 "base_bdevs_list": [ 00:19:17.766 { 00:19:17.766 "name": "spare", 00:19:17.766 "uuid": "023b668c-990c-5ddf-8a1d-e355fdcbb404", 00:19:17.766 "is_configured": true, 00:19:17.766 "data_offset": 2048, 00:19:17.766 "data_size": 63488 00:19:17.766 }, 00:19:17.766 { 00:19:17.766 "name": null, 00:19:17.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.766 "is_configured": false, 00:19:17.766 "data_offset": 0, 00:19:17.766 "data_size": 63488 00:19:17.766 }, 00:19:17.766 { 00:19:17.766 "name": "BaseBdev3", 00:19:17.766 "uuid": "24d07dd7-f78a-5198-ad8d-dcfaba693d28", 00:19:17.766 "is_configured": true, 00:19:17.766 "data_offset": 2048, 00:19:17.766 "data_size": 63488 00:19:17.766 }, 00:19:17.766 { 00:19:17.766 "name": "BaseBdev4", 00:19:17.766 "uuid": "4ad41955-09ee-5e7d-942f-e09a5455b1ce", 00:19:17.766 "is_configured": true, 00:19:17.766 "data_offset": 
2048, 00:19:17.766 "data_size": 63488 00:19:17.766 } 00:19:17.766 ] 00:19:17.766 }' 00:19:17.766 13:39:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:17.766 13:39:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:17.766 13:39:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:17.766 13:39:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:17.766 13:39:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:18.334 [2024-11-20 13:39:17.620678] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:18.334 [2024-11-20 13:39:17.620773] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:18.334 [2024-11-20 13:39:17.620916] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:18.901 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:18.901 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:18.901 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:18.901 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:18.901 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:18.901 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:18.901 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.901 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.901 13:39:18 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.901 13:39:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.901 13:39:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.901 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:18.901 "name": "raid_bdev1", 00:19:18.901 "uuid": "7216608b-2e23-465d-ac44-ba3e9a6480e1", 00:19:18.901 "strip_size_kb": 0, 00:19:18.901 "state": "online", 00:19:18.901 "raid_level": "raid1", 00:19:18.901 "superblock": true, 00:19:18.901 "num_base_bdevs": 4, 00:19:18.901 "num_base_bdevs_discovered": 3, 00:19:18.901 "num_base_bdevs_operational": 3, 00:19:18.901 "base_bdevs_list": [ 00:19:18.901 { 00:19:18.901 "name": "spare", 00:19:18.901 "uuid": "023b668c-990c-5ddf-8a1d-e355fdcbb404", 00:19:18.901 "is_configured": true, 00:19:18.901 "data_offset": 2048, 00:19:18.901 "data_size": 63488 00:19:18.901 }, 00:19:18.901 { 00:19:18.901 "name": null, 00:19:18.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.901 "is_configured": false, 00:19:18.901 "data_offset": 0, 00:19:18.901 "data_size": 63488 00:19:18.901 }, 00:19:18.901 { 00:19:18.901 "name": "BaseBdev3", 00:19:18.901 "uuid": "24d07dd7-f78a-5198-ad8d-dcfaba693d28", 00:19:18.901 "is_configured": true, 00:19:18.901 "data_offset": 2048, 00:19:18.901 "data_size": 63488 00:19:18.901 }, 00:19:18.901 { 00:19:18.901 "name": "BaseBdev4", 00:19:18.901 "uuid": "4ad41955-09ee-5e7d-942f-e09a5455b1ce", 00:19:18.901 "is_configured": true, 00:19:18.901 "data_offset": 2048, 00:19:18.901 "data_size": 63488 00:19:18.901 } 00:19:18.901 ] 00:19:18.901 }' 00:19:18.901 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:18.901 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:18.901 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:19:18.901 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:18.901 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:19:18.901 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:18.901 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:18.901 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:18.901 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:18.901 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:18.901 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.901 13:39:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.901 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.901 13:39:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.901 13:39:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.901 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:18.901 "name": "raid_bdev1", 00:19:18.901 "uuid": "7216608b-2e23-465d-ac44-ba3e9a6480e1", 00:19:18.901 "strip_size_kb": 0, 00:19:18.901 "state": "online", 00:19:18.901 "raid_level": "raid1", 00:19:18.901 "superblock": true, 00:19:18.901 "num_base_bdevs": 4, 00:19:18.901 "num_base_bdevs_discovered": 3, 00:19:18.901 "num_base_bdevs_operational": 3, 00:19:18.901 "base_bdevs_list": [ 00:19:18.901 { 00:19:18.901 "name": "spare", 00:19:18.901 "uuid": "023b668c-990c-5ddf-8a1d-e355fdcbb404", 00:19:18.901 "is_configured": true, 00:19:18.901 "data_offset": 2048, 
00:19:18.901 "data_size": 63488 00:19:18.901 }, 00:19:18.901 { 00:19:18.901 "name": null, 00:19:18.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.901 "is_configured": false, 00:19:18.901 "data_offset": 0, 00:19:18.901 "data_size": 63488 00:19:18.901 }, 00:19:18.901 { 00:19:18.901 "name": "BaseBdev3", 00:19:18.901 "uuid": "24d07dd7-f78a-5198-ad8d-dcfaba693d28", 00:19:18.901 "is_configured": true, 00:19:18.901 "data_offset": 2048, 00:19:18.901 "data_size": 63488 00:19:18.901 }, 00:19:18.901 { 00:19:18.901 "name": "BaseBdev4", 00:19:18.901 "uuid": "4ad41955-09ee-5e7d-942f-e09a5455b1ce", 00:19:18.901 "is_configured": true, 00:19:18.902 "data_offset": 2048, 00:19:18.902 "data_size": 63488 00:19:18.902 } 00:19:18.902 ] 00:19:18.902 }' 00:19:18.902 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:18.902 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:18.902 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:18.902 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:18.902 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:18.902 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:18.902 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:18.902 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:18.902 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:18.902 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:18.902 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.902 
13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.902 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:18.902 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:19.160 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.160 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.160 13:39:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.160 13:39:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.160 13:39:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.160 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:19.160 "name": "raid_bdev1", 00:19:19.160 "uuid": "7216608b-2e23-465d-ac44-ba3e9a6480e1", 00:19:19.160 "strip_size_kb": 0, 00:19:19.160 "state": "online", 00:19:19.160 "raid_level": "raid1", 00:19:19.160 "superblock": true, 00:19:19.160 "num_base_bdevs": 4, 00:19:19.160 "num_base_bdevs_discovered": 3, 00:19:19.160 "num_base_bdevs_operational": 3, 00:19:19.160 "base_bdevs_list": [ 00:19:19.160 { 00:19:19.160 "name": "spare", 00:19:19.160 "uuid": "023b668c-990c-5ddf-8a1d-e355fdcbb404", 00:19:19.160 "is_configured": true, 00:19:19.160 "data_offset": 2048, 00:19:19.160 "data_size": 63488 00:19:19.160 }, 00:19:19.160 { 00:19:19.160 "name": null, 00:19:19.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.160 "is_configured": false, 00:19:19.160 "data_offset": 0, 00:19:19.160 "data_size": 63488 00:19:19.160 }, 00:19:19.160 { 00:19:19.160 "name": "BaseBdev3", 00:19:19.160 "uuid": "24d07dd7-f78a-5198-ad8d-dcfaba693d28", 00:19:19.160 "is_configured": true, 00:19:19.160 "data_offset": 2048, 00:19:19.160 "data_size": 63488 
00:19:19.160 }, 00:19:19.160 { 00:19:19.160 "name": "BaseBdev4", 00:19:19.160 "uuid": "4ad41955-09ee-5e7d-942f-e09a5455b1ce", 00:19:19.160 "is_configured": true, 00:19:19.160 "data_offset": 2048, 00:19:19.160 "data_size": 63488 00:19:19.160 } 00:19:19.160 ] 00:19:19.160 }' 00:19:19.160 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.160 13:39:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.419 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:19.419 13:39:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.419 13:39:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.419 [2024-11-20 13:39:18.802692] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:19.419 [2024-11-20 13:39:18.802728] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:19.419 [2024-11-20 13:39:18.802817] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:19.419 [2024-11-20 13:39:18.802905] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:19.419 [2024-11-20 13:39:18.802919] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:19.419 13:39:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.419 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:19:19.419 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.419 13:39:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.419 13:39:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.419 
13:39:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.419 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:19.419 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:19.419 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:19.419 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:19.419 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:19.419 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:19.419 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:19.419 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:19.419 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:19.419 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:19.419 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:19.419 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:19.419 13:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:19.679 /dev/nbd0 00:19:19.679 13:39:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:19.679 13:39:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:19.679 13:39:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:19.679 13:39:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:19:19.679 13:39:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:19.679 13:39:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:19.679 13:39:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:19.679 13:39:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:19.679 13:39:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:19.679 13:39:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:19.679 13:39:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:19.679 1+0 records in 00:19:19.679 1+0 records out 00:19:19.679 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000342394 s, 12.0 MB/s 00:19:19.679 13:39:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:19.679 13:39:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:19.679 13:39:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:19.679 13:39:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:19.679 13:39:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:19.679 13:39:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:19.679 13:39:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:19.679 13:39:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:19.938 /dev/nbd1 00:19:19.938 13:39:19 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:19.938 13:39:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:19.938 13:39:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:19.938 13:39:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:19.938 13:39:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:19.938 13:39:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:19.938 13:39:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:19.938 13:39:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:19.938 13:39:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:19.938 13:39:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:19.938 13:39:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:19.938 1+0 records in 00:19:19.938 1+0 records out 00:19:19.938 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00049431 s, 8.3 MB/s 00:19:19.938 13:39:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:19.938 13:39:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:19.938 13:39:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:19.938 13:39:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:19.938 13:39:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:19.938 13:39:19 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:19.938 13:39:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:19.938 13:39:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:20.194 13:39:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:20.194 13:39:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:20.194 13:39:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:20.194 13:39:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:20.194 13:39:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:20.194 13:39:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:20.194 13:39:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:20.483 13:39:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:20.483 13:39:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:20.483 13:39:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:20.483 13:39:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:20.483 13:39:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:20.483 13:39:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:20.483 13:39:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:20.483 13:39:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:20.483 13:39:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:19:20.483 13:39:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:20.752 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:20.752 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:20.752 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:20.752 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:20.752 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:20.752 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:20.752 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:20.752 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:20.752 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:20.752 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:20.752 13:39:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.752 13:39:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.752 13:39:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.752 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:20.752 13:39:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.752 13:39:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.752 [2024-11-20 13:39:20.138073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:20.752 [2024-11-20 
13:39:20.138144] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.752 [2024-11-20 13:39:20.138174] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:19:20.752 [2024-11-20 13:39:20.138189] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.752 [2024-11-20 13:39:20.140830] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.752 [2024-11-20 13:39:20.140874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:20.752 [2024-11-20 13:39:20.140969] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:20.752 [2024-11-20 13:39:20.141027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:20.752 [2024-11-20 13:39:20.141198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:20.752 [2024-11-20 13:39:20.141288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:20.752 spare 00:19:20.752 13:39:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.752 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:20.752 13:39:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.752 13:39:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.011 [2024-11-20 13:39:20.241213] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:21.011 [2024-11-20 13:39:20.241354] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:21.011 [2024-11-20 13:39:20.241662] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:19:21.011 [2024-11-20 13:39:20.241864] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007b00 00:19:21.011 [2024-11-20 13:39:20.241880] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:21.011 [2024-11-20 13:39:20.242120] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:21.011 13:39:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.011 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:21.011 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:21.011 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:21.011 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:21.011 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:21.011 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:21.011 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:21.011 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:21.011 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:21.011 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:21.011 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.011 13:39:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.011 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.011 13:39:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.011 13:39:20 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.011 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:21.011 "name": "raid_bdev1", 00:19:21.011 "uuid": "7216608b-2e23-465d-ac44-ba3e9a6480e1", 00:19:21.011 "strip_size_kb": 0, 00:19:21.011 "state": "online", 00:19:21.011 "raid_level": "raid1", 00:19:21.011 "superblock": true, 00:19:21.011 "num_base_bdevs": 4, 00:19:21.011 "num_base_bdevs_discovered": 3, 00:19:21.011 "num_base_bdevs_operational": 3, 00:19:21.011 "base_bdevs_list": [ 00:19:21.011 { 00:19:21.011 "name": "spare", 00:19:21.011 "uuid": "023b668c-990c-5ddf-8a1d-e355fdcbb404", 00:19:21.011 "is_configured": true, 00:19:21.011 "data_offset": 2048, 00:19:21.011 "data_size": 63488 00:19:21.011 }, 00:19:21.011 { 00:19:21.011 "name": null, 00:19:21.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.011 "is_configured": false, 00:19:21.011 "data_offset": 2048, 00:19:21.011 "data_size": 63488 00:19:21.011 }, 00:19:21.011 { 00:19:21.011 "name": "BaseBdev3", 00:19:21.011 "uuid": "24d07dd7-f78a-5198-ad8d-dcfaba693d28", 00:19:21.011 "is_configured": true, 00:19:21.011 "data_offset": 2048, 00:19:21.011 "data_size": 63488 00:19:21.011 }, 00:19:21.011 { 00:19:21.011 "name": "BaseBdev4", 00:19:21.011 "uuid": "4ad41955-09ee-5e7d-942f-e09a5455b1ce", 00:19:21.011 "is_configured": true, 00:19:21.011 "data_offset": 2048, 00:19:21.011 "data_size": 63488 00:19:21.011 } 00:19:21.011 ] 00:19:21.011 }' 00:19:21.011 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:21.011 13:39:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.269 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:21.269 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:21.269 13:39:20 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:21.269 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:21.269 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:21.269 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.269 13:39:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.269 13:39:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.269 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.269 13:39:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.269 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:21.269 "name": "raid_bdev1", 00:19:21.269 "uuid": "7216608b-2e23-465d-ac44-ba3e9a6480e1", 00:19:21.269 "strip_size_kb": 0, 00:19:21.269 "state": "online", 00:19:21.269 "raid_level": "raid1", 00:19:21.269 "superblock": true, 00:19:21.269 "num_base_bdevs": 4, 00:19:21.269 "num_base_bdevs_discovered": 3, 00:19:21.269 "num_base_bdevs_operational": 3, 00:19:21.269 "base_bdevs_list": [ 00:19:21.269 { 00:19:21.269 "name": "spare", 00:19:21.269 "uuid": "023b668c-990c-5ddf-8a1d-e355fdcbb404", 00:19:21.269 "is_configured": true, 00:19:21.269 "data_offset": 2048, 00:19:21.269 "data_size": 63488 00:19:21.269 }, 00:19:21.269 { 00:19:21.269 "name": null, 00:19:21.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.269 "is_configured": false, 00:19:21.269 "data_offset": 2048, 00:19:21.269 "data_size": 63488 00:19:21.269 }, 00:19:21.269 { 00:19:21.269 "name": "BaseBdev3", 00:19:21.269 "uuid": "24d07dd7-f78a-5198-ad8d-dcfaba693d28", 00:19:21.269 "is_configured": true, 00:19:21.269 "data_offset": 2048, 00:19:21.269 "data_size": 63488 00:19:21.269 }, 00:19:21.269 { 00:19:21.269 
"name": "BaseBdev4", 00:19:21.269 "uuid": "4ad41955-09ee-5e7d-942f-e09a5455b1ce", 00:19:21.269 "is_configured": true, 00:19:21.269 "data_offset": 2048, 00:19:21.269 "data_size": 63488 00:19:21.269 } 00:19:21.269 ] 00:19:21.269 }' 00:19:21.269 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:21.527 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:21.527 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:21.527 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:21.527 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:21.527 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.527 13:39:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.527 13:39:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.527 13:39:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.527 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:21.527 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:21.527 13:39:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.527 13:39:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.527 [2024-11-20 13:39:20.893203] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:21.527 13:39:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.527 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 
00:19:21.527 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:21.527 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:21.527 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:21.527 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:21.527 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:21.527 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:21.527 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:21.527 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:21.527 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:21.527 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.527 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.527 13:39:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.527 13:39:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.527 13:39:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.527 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:21.527 "name": "raid_bdev1", 00:19:21.527 "uuid": "7216608b-2e23-465d-ac44-ba3e9a6480e1", 00:19:21.527 "strip_size_kb": 0, 00:19:21.527 "state": "online", 00:19:21.527 "raid_level": "raid1", 00:19:21.527 "superblock": true, 00:19:21.527 "num_base_bdevs": 4, 00:19:21.527 "num_base_bdevs_discovered": 2, 00:19:21.527 "num_base_bdevs_operational": 2, 00:19:21.527 
"base_bdevs_list": [ 00:19:21.527 { 00:19:21.527 "name": null, 00:19:21.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.527 "is_configured": false, 00:19:21.527 "data_offset": 0, 00:19:21.527 "data_size": 63488 00:19:21.527 }, 00:19:21.527 { 00:19:21.527 "name": null, 00:19:21.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.527 "is_configured": false, 00:19:21.527 "data_offset": 2048, 00:19:21.527 "data_size": 63488 00:19:21.527 }, 00:19:21.527 { 00:19:21.527 "name": "BaseBdev3", 00:19:21.527 "uuid": "24d07dd7-f78a-5198-ad8d-dcfaba693d28", 00:19:21.527 "is_configured": true, 00:19:21.527 "data_offset": 2048, 00:19:21.527 "data_size": 63488 00:19:21.527 }, 00:19:21.527 { 00:19:21.527 "name": "BaseBdev4", 00:19:21.527 "uuid": "4ad41955-09ee-5e7d-942f-e09a5455b1ce", 00:19:21.527 "is_configured": true, 00:19:21.527 "data_offset": 2048, 00:19:21.527 "data_size": 63488 00:19:21.527 } 00:19:21.527 ] 00:19:21.527 }' 00:19:21.527 13:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:21.527 13:39:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.093 13:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:22.093 13:39:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.093 13:39:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.093 [2024-11-20 13:39:21.292743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:22.093 [2024-11-20 13:39:21.293134] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:19:22.093 [2024-11-20 13:39:21.293283] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:22.093 [2024-11-20 13:39:21.293400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:22.093 [2024-11-20 13:39:21.308733] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:19:22.093 13:39:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.093 13:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:22.093 [2024-11-20 13:39:21.311059] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:23.024 13:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:23.024 13:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:23.024 13:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:23.024 13:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:23.024 13:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:23.024 13:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.024 13:39:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.024 13:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.024 13:39:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.024 13:39:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.024 13:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:23.024 "name": "raid_bdev1", 00:19:23.024 "uuid": "7216608b-2e23-465d-ac44-ba3e9a6480e1", 00:19:23.024 "strip_size_kb": 0, 00:19:23.024 "state": "online", 00:19:23.024 "raid_level": "raid1", 
00:19:23.024 "superblock": true, 00:19:23.024 "num_base_bdevs": 4, 00:19:23.024 "num_base_bdevs_discovered": 3, 00:19:23.024 "num_base_bdevs_operational": 3, 00:19:23.024 "process": { 00:19:23.024 "type": "rebuild", 00:19:23.024 "target": "spare", 00:19:23.024 "progress": { 00:19:23.024 "blocks": 20480, 00:19:23.024 "percent": 32 00:19:23.024 } 00:19:23.024 }, 00:19:23.024 "base_bdevs_list": [ 00:19:23.024 { 00:19:23.024 "name": "spare", 00:19:23.024 "uuid": "023b668c-990c-5ddf-8a1d-e355fdcbb404", 00:19:23.024 "is_configured": true, 00:19:23.024 "data_offset": 2048, 00:19:23.024 "data_size": 63488 00:19:23.024 }, 00:19:23.024 { 00:19:23.024 "name": null, 00:19:23.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.024 "is_configured": false, 00:19:23.024 "data_offset": 2048, 00:19:23.024 "data_size": 63488 00:19:23.024 }, 00:19:23.024 { 00:19:23.024 "name": "BaseBdev3", 00:19:23.024 "uuid": "24d07dd7-f78a-5198-ad8d-dcfaba693d28", 00:19:23.024 "is_configured": true, 00:19:23.024 "data_offset": 2048, 00:19:23.024 "data_size": 63488 00:19:23.024 }, 00:19:23.024 { 00:19:23.024 "name": "BaseBdev4", 00:19:23.024 "uuid": "4ad41955-09ee-5e7d-942f-e09a5455b1ce", 00:19:23.024 "is_configured": true, 00:19:23.024 "data_offset": 2048, 00:19:23.024 "data_size": 63488 00:19:23.024 } 00:19:23.024 ] 00:19:23.024 }' 00:19:23.024 13:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:23.024 13:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:23.024 13:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:23.025 13:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:23.025 13:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:23.025 13:39:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:23.025 13:39:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.025 [2024-11-20 13:39:22.458448] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:23.283 [2024-11-20 13:39:22.516425] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:23.283 [2024-11-20 13:39:22.516505] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:23.283 [2024-11-20 13:39:22.516528] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:23.283 [2024-11-20 13:39:22.516537] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:23.283 13:39:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.283 13:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:23.283 13:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:23.283 13:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:23.283 13:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:23.283 13:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:23.283 13:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:23.283 13:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.283 13:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.283 13:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:23.283 13:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.283 13:39:22 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.283 13:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.283 13:39:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.283 13:39:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.283 13:39:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.283 13:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:23.283 "name": "raid_bdev1", 00:19:23.283 "uuid": "7216608b-2e23-465d-ac44-ba3e9a6480e1", 00:19:23.283 "strip_size_kb": 0, 00:19:23.283 "state": "online", 00:19:23.283 "raid_level": "raid1", 00:19:23.283 "superblock": true, 00:19:23.283 "num_base_bdevs": 4, 00:19:23.283 "num_base_bdevs_discovered": 2, 00:19:23.283 "num_base_bdevs_operational": 2, 00:19:23.283 "base_bdevs_list": [ 00:19:23.283 { 00:19:23.283 "name": null, 00:19:23.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.283 "is_configured": false, 00:19:23.283 "data_offset": 0, 00:19:23.283 "data_size": 63488 00:19:23.283 }, 00:19:23.283 { 00:19:23.283 "name": null, 00:19:23.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.283 "is_configured": false, 00:19:23.283 "data_offset": 2048, 00:19:23.283 "data_size": 63488 00:19:23.283 }, 00:19:23.283 { 00:19:23.283 "name": "BaseBdev3", 00:19:23.283 "uuid": "24d07dd7-f78a-5198-ad8d-dcfaba693d28", 00:19:23.283 "is_configured": true, 00:19:23.283 "data_offset": 2048, 00:19:23.283 "data_size": 63488 00:19:23.283 }, 00:19:23.283 { 00:19:23.283 "name": "BaseBdev4", 00:19:23.283 "uuid": "4ad41955-09ee-5e7d-942f-e09a5455b1ce", 00:19:23.283 "is_configured": true, 00:19:23.283 "data_offset": 2048, 00:19:23.283 "data_size": 63488 00:19:23.283 } 00:19:23.283 ] 00:19:23.283 }' 00:19:23.284 13:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:19:23.284 13:39:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.542 13:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:23.542 13:39:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.542 13:39:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.542 [2024-11-20 13:39:22.938395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:23.542 [2024-11-20 13:39:22.938463] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.542 [2024-11-20 13:39:22.938496] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:19:23.542 [2024-11-20 13:39:22.938508] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.542 [2024-11-20 13:39:22.938977] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.542 [2024-11-20 13:39:22.938997] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:23.542 [2024-11-20 13:39:22.939116] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:23.542 [2024-11-20 13:39:22.939131] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:19:23.542 [2024-11-20 13:39:22.939150] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:23.542 [2024-11-20 13:39:22.939173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:23.542 [2024-11-20 13:39:22.953503] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:19:23.542 spare 00:19:23.542 13:39:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.542 13:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:23.542 [2024-11-20 13:39:22.955641] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:24.479 13:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:24.479 13:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:24.479 13:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:24.479 13:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:24.479 13:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:24.735 13:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.735 13:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.735 13:39:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.735 13:39:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.735 13:39:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.735 13:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:24.735 "name": "raid_bdev1", 00:19:24.735 "uuid": "7216608b-2e23-465d-ac44-ba3e9a6480e1", 00:19:24.735 "strip_size_kb": 0, 00:19:24.735 "state": "online", 00:19:24.735 
"raid_level": "raid1", 00:19:24.735 "superblock": true, 00:19:24.735 "num_base_bdevs": 4, 00:19:24.735 "num_base_bdevs_discovered": 3, 00:19:24.735 "num_base_bdevs_operational": 3, 00:19:24.735 "process": { 00:19:24.735 "type": "rebuild", 00:19:24.735 "target": "spare", 00:19:24.735 "progress": { 00:19:24.735 "blocks": 20480, 00:19:24.735 "percent": 32 00:19:24.735 } 00:19:24.735 }, 00:19:24.735 "base_bdevs_list": [ 00:19:24.735 { 00:19:24.735 "name": "spare", 00:19:24.735 "uuid": "023b668c-990c-5ddf-8a1d-e355fdcbb404", 00:19:24.735 "is_configured": true, 00:19:24.735 "data_offset": 2048, 00:19:24.735 "data_size": 63488 00:19:24.735 }, 00:19:24.735 { 00:19:24.735 "name": null, 00:19:24.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.735 "is_configured": false, 00:19:24.735 "data_offset": 2048, 00:19:24.735 "data_size": 63488 00:19:24.735 }, 00:19:24.735 { 00:19:24.735 "name": "BaseBdev3", 00:19:24.736 "uuid": "24d07dd7-f78a-5198-ad8d-dcfaba693d28", 00:19:24.736 "is_configured": true, 00:19:24.736 "data_offset": 2048, 00:19:24.736 "data_size": 63488 00:19:24.736 }, 00:19:24.736 { 00:19:24.736 "name": "BaseBdev4", 00:19:24.736 "uuid": "4ad41955-09ee-5e7d-942f-e09a5455b1ce", 00:19:24.736 "is_configured": true, 00:19:24.736 "data_offset": 2048, 00:19:24.736 "data_size": 63488 00:19:24.736 } 00:19:24.736 ] 00:19:24.736 }' 00:19:24.736 13:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:24.736 13:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:24.736 13:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:24.736 13:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:24.736 13:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:24.736 13:39:24 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.736 13:39:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.736 [2024-11-20 13:39:24.111492] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:24.736 [2024-11-20 13:39:24.161108] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:24.736 [2024-11-20 13:39:24.161190] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:24.736 [2024-11-20 13:39:24.161209] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:24.736 [2024-11-20 13:39:24.161220] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:24.736 13:39:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.736 13:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:24.736 13:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:24.736 13:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:24.736 13:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:24.736 13:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:24.736 13:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:24.736 13:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:24.736 13:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:24.736 13:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:24.736 13:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:24.736 
13:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.736 13:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.736 13:39:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.736 13:39:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.993 13:39:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.993 13:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:24.993 "name": "raid_bdev1", 00:19:24.993 "uuid": "7216608b-2e23-465d-ac44-ba3e9a6480e1", 00:19:24.993 "strip_size_kb": 0, 00:19:24.993 "state": "online", 00:19:24.993 "raid_level": "raid1", 00:19:24.993 "superblock": true, 00:19:24.993 "num_base_bdevs": 4, 00:19:24.993 "num_base_bdevs_discovered": 2, 00:19:24.993 "num_base_bdevs_operational": 2, 00:19:24.993 "base_bdevs_list": [ 00:19:24.993 { 00:19:24.993 "name": null, 00:19:24.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.993 "is_configured": false, 00:19:24.993 "data_offset": 0, 00:19:24.993 "data_size": 63488 00:19:24.993 }, 00:19:24.993 { 00:19:24.994 "name": null, 00:19:24.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.994 "is_configured": false, 00:19:24.994 "data_offset": 2048, 00:19:24.994 "data_size": 63488 00:19:24.994 }, 00:19:24.994 { 00:19:24.994 "name": "BaseBdev3", 00:19:24.994 "uuid": "24d07dd7-f78a-5198-ad8d-dcfaba693d28", 00:19:24.994 "is_configured": true, 00:19:24.994 "data_offset": 2048, 00:19:24.994 "data_size": 63488 00:19:24.994 }, 00:19:24.994 { 00:19:24.994 "name": "BaseBdev4", 00:19:24.994 "uuid": "4ad41955-09ee-5e7d-942f-e09a5455b1ce", 00:19:24.994 "is_configured": true, 00:19:24.994 "data_offset": 2048, 00:19:24.994 "data_size": 63488 00:19:24.994 } 00:19:24.994 ] 00:19:24.994 }' 00:19:24.994 13:39:24 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:24.994 13:39:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.253 13:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:25.253 13:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:25.253 13:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:25.253 13:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:25.253 13:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:25.253 13:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.253 13:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.253 13:39:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.253 13:39:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.253 13:39:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.253 13:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:25.253 "name": "raid_bdev1", 00:19:25.253 "uuid": "7216608b-2e23-465d-ac44-ba3e9a6480e1", 00:19:25.253 "strip_size_kb": 0, 00:19:25.253 "state": "online", 00:19:25.253 "raid_level": "raid1", 00:19:25.253 "superblock": true, 00:19:25.253 "num_base_bdevs": 4, 00:19:25.253 "num_base_bdevs_discovered": 2, 00:19:25.253 "num_base_bdevs_operational": 2, 00:19:25.253 "base_bdevs_list": [ 00:19:25.253 { 00:19:25.253 "name": null, 00:19:25.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.253 "is_configured": false, 00:19:25.253 "data_offset": 0, 00:19:25.253 "data_size": 63488 00:19:25.253 }, 00:19:25.253 
{ 00:19:25.253 "name": null, 00:19:25.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.253 "is_configured": false, 00:19:25.253 "data_offset": 2048, 00:19:25.253 "data_size": 63488 00:19:25.253 }, 00:19:25.253 { 00:19:25.253 "name": "BaseBdev3", 00:19:25.253 "uuid": "24d07dd7-f78a-5198-ad8d-dcfaba693d28", 00:19:25.253 "is_configured": true, 00:19:25.253 "data_offset": 2048, 00:19:25.253 "data_size": 63488 00:19:25.253 }, 00:19:25.253 { 00:19:25.253 "name": "BaseBdev4", 00:19:25.253 "uuid": "4ad41955-09ee-5e7d-942f-e09a5455b1ce", 00:19:25.253 "is_configured": true, 00:19:25.253 "data_offset": 2048, 00:19:25.253 "data_size": 63488 00:19:25.253 } 00:19:25.253 ] 00:19:25.253 }' 00:19:25.253 13:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:25.513 13:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:25.513 13:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:25.513 13:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:25.513 13:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:25.513 13:39:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.513 13:39:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.513 13:39:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.513 13:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:25.513 13:39:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.513 13:39:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.513 [2024-11-20 13:39:24.800031] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:25.513 [2024-11-20 13:39:24.800240] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:25.513 [2024-11-20 13:39:24.800271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:19:25.513 [2024-11-20 13:39:24.800287] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:25.513 [2024-11-20 13:39:24.800783] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:25.513 [2024-11-20 13:39:24.800806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:25.513 [2024-11-20 13:39:24.800885] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:25.513 [2024-11-20 13:39:24.800909] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:19:25.513 [2024-11-20 13:39:24.800924] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:25.513 [2024-11-20 13:39:24.800951] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:25.513 BaseBdev1 00:19:25.513 13:39:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.513 13:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:26.452 13:39:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:26.452 13:39:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:26.452 13:39:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:26.452 13:39:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:26.452 13:39:25 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:26.452 13:39:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:26.452 13:39:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:26.452 13:39:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:26.452 13:39:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:26.452 13:39:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:26.452 13:39:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.452 13:39:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.452 13:39:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.452 13:39:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:26.452 13:39:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.452 13:39:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:26.452 "name": "raid_bdev1", 00:19:26.452 "uuid": "7216608b-2e23-465d-ac44-ba3e9a6480e1", 00:19:26.452 "strip_size_kb": 0, 00:19:26.452 "state": "online", 00:19:26.452 "raid_level": "raid1", 00:19:26.452 "superblock": true, 00:19:26.452 "num_base_bdevs": 4, 00:19:26.452 "num_base_bdevs_discovered": 2, 00:19:26.452 "num_base_bdevs_operational": 2, 00:19:26.452 "base_bdevs_list": [ 00:19:26.452 { 00:19:26.452 "name": null, 00:19:26.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.452 "is_configured": false, 00:19:26.452 "data_offset": 0, 00:19:26.452 "data_size": 63488 00:19:26.452 }, 00:19:26.452 { 00:19:26.452 "name": null, 00:19:26.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.452 
"is_configured": false, 00:19:26.452 "data_offset": 2048, 00:19:26.452 "data_size": 63488 00:19:26.452 }, 00:19:26.452 { 00:19:26.452 "name": "BaseBdev3", 00:19:26.452 "uuid": "24d07dd7-f78a-5198-ad8d-dcfaba693d28", 00:19:26.452 "is_configured": true, 00:19:26.452 "data_offset": 2048, 00:19:26.452 "data_size": 63488 00:19:26.452 }, 00:19:26.452 { 00:19:26.452 "name": "BaseBdev4", 00:19:26.452 "uuid": "4ad41955-09ee-5e7d-942f-e09a5455b1ce", 00:19:26.452 "is_configured": true, 00:19:26.452 "data_offset": 2048, 00:19:26.452 "data_size": 63488 00:19:26.452 } 00:19:26.452 ] 00:19:26.452 }' 00:19:26.452 13:39:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:26.452 13:39:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.021 13:39:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:27.021 13:39:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:27.021 13:39:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:27.021 13:39:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:27.021 13:39:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:27.021 13:39:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.021 13:39:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.021 13:39:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.021 13:39:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.021 13:39:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.021 13:39:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:19:27.021 "name": "raid_bdev1", 00:19:27.021 "uuid": "7216608b-2e23-465d-ac44-ba3e9a6480e1", 00:19:27.021 "strip_size_kb": 0, 00:19:27.021 "state": "online", 00:19:27.021 "raid_level": "raid1", 00:19:27.021 "superblock": true, 00:19:27.021 "num_base_bdevs": 4, 00:19:27.021 "num_base_bdevs_discovered": 2, 00:19:27.021 "num_base_bdevs_operational": 2, 00:19:27.021 "base_bdevs_list": [ 00:19:27.021 { 00:19:27.021 "name": null, 00:19:27.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.021 "is_configured": false, 00:19:27.021 "data_offset": 0, 00:19:27.021 "data_size": 63488 00:19:27.021 }, 00:19:27.021 { 00:19:27.021 "name": null, 00:19:27.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.021 "is_configured": false, 00:19:27.021 "data_offset": 2048, 00:19:27.021 "data_size": 63488 00:19:27.021 }, 00:19:27.021 { 00:19:27.021 "name": "BaseBdev3", 00:19:27.021 "uuid": "24d07dd7-f78a-5198-ad8d-dcfaba693d28", 00:19:27.021 "is_configured": true, 00:19:27.021 "data_offset": 2048, 00:19:27.021 "data_size": 63488 00:19:27.021 }, 00:19:27.021 { 00:19:27.021 "name": "BaseBdev4", 00:19:27.021 "uuid": "4ad41955-09ee-5e7d-942f-e09a5455b1ce", 00:19:27.021 "is_configured": true, 00:19:27.021 "data_offset": 2048, 00:19:27.021 "data_size": 63488 00:19:27.021 } 00:19:27.021 ] 00:19:27.021 }' 00:19:27.021 13:39:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:27.021 13:39:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:27.021 13:39:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:27.021 13:39:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:27.021 13:39:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:27.021 13:39:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:19:27.021 13:39:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:27.021 13:39:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:27.021 13:39:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:27.021 13:39:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:27.021 13:39:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:27.021 13:39:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:27.021 13:39:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.021 13:39:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.021 [2024-11-20 13:39:26.386404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:27.021 [2024-11-20 13:39:26.386613] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:19:27.021 [2024-11-20 13:39:26.386631] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:27.021 request: 00:19:27.021 { 00:19:27.021 "base_bdev": "BaseBdev1", 00:19:27.021 "raid_bdev": "raid_bdev1", 00:19:27.021 "method": "bdev_raid_add_base_bdev", 00:19:27.021 "req_id": 1 00:19:27.021 } 00:19:27.021 Got JSON-RPC error response 00:19:27.021 response: 00:19:27.021 { 00:19:27.021 "code": -22, 00:19:27.021 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:27.021 } 00:19:27.021 13:39:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:27.021 13:39:26 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:19:27.021 13:39:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:27.021 13:39:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:27.021 13:39:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:27.021 13:39:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:27.970 13:39:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:27.970 13:39:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:27.970 13:39:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:27.970 13:39:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:27.970 13:39:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:27.970 13:39:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:27.970 13:39:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:27.970 13:39:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:27.970 13:39:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:27.970 13:39:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:27.970 13:39:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.970 13:39:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.970 13:39:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.970 13:39:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:19:27.970 13:39:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.230 13:39:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:28.230 "name": "raid_bdev1", 00:19:28.230 "uuid": "7216608b-2e23-465d-ac44-ba3e9a6480e1", 00:19:28.230 "strip_size_kb": 0, 00:19:28.230 "state": "online", 00:19:28.230 "raid_level": "raid1", 00:19:28.230 "superblock": true, 00:19:28.230 "num_base_bdevs": 4, 00:19:28.230 "num_base_bdevs_discovered": 2, 00:19:28.230 "num_base_bdevs_operational": 2, 00:19:28.230 "base_bdevs_list": [ 00:19:28.230 { 00:19:28.230 "name": null, 00:19:28.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.230 "is_configured": false, 00:19:28.230 "data_offset": 0, 00:19:28.230 "data_size": 63488 00:19:28.230 }, 00:19:28.230 { 00:19:28.230 "name": null, 00:19:28.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.230 "is_configured": false, 00:19:28.230 "data_offset": 2048, 00:19:28.230 "data_size": 63488 00:19:28.230 }, 00:19:28.230 { 00:19:28.230 "name": "BaseBdev3", 00:19:28.230 "uuid": "24d07dd7-f78a-5198-ad8d-dcfaba693d28", 00:19:28.230 "is_configured": true, 00:19:28.230 "data_offset": 2048, 00:19:28.230 "data_size": 63488 00:19:28.230 }, 00:19:28.230 { 00:19:28.230 "name": "BaseBdev4", 00:19:28.230 "uuid": "4ad41955-09ee-5e7d-942f-e09a5455b1ce", 00:19:28.230 "is_configured": true, 00:19:28.230 "data_offset": 2048, 00:19:28.230 "data_size": 63488 00:19:28.230 } 00:19:28.230 ] 00:19:28.230 }' 00:19:28.230 13:39:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:28.230 13:39:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.489 13:39:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:28.489 13:39:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:28.489 13:39:27 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:28.489 13:39:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:28.489 13:39:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:28.489 13:39:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.489 13:39:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.489 13:39:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.489 13:39:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.489 13:39:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.489 13:39:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:28.489 "name": "raid_bdev1", 00:19:28.489 "uuid": "7216608b-2e23-465d-ac44-ba3e9a6480e1", 00:19:28.489 "strip_size_kb": 0, 00:19:28.489 "state": "online", 00:19:28.489 "raid_level": "raid1", 00:19:28.489 "superblock": true, 00:19:28.489 "num_base_bdevs": 4, 00:19:28.489 "num_base_bdevs_discovered": 2, 00:19:28.489 "num_base_bdevs_operational": 2, 00:19:28.489 "base_bdevs_list": [ 00:19:28.489 { 00:19:28.489 "name": null, 00:19:28.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.489 "is_configured": false, 00:19:28.489 "data_offset": 0, 00:19:28.489 "data_size": 63488 00:19:28.489 }, 00:19:28.489 { 00:19:28.489 "name": null, 00:19:28.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.489 "is_configured": false, 00:19:28.489 "data_offset": 2048, 00:19:28.489 "data_size": 63488 00:19:28.489 }, 00:19:28.489 { 00:19:28.489 "name": "BaseBdev3", 00:19:28.489 "uuid": "24d07dd7-f78a-5198-ad8d-dcfaba693d28", 00:19:28.489 "is_configured": true, 00:19:28.489 "data_offset": 2048, 00:19:28.489 "data_size": 63488 00:19:28.489 }, 
00:19:28.489 { 00:19:28.489 "name": "BaseBdev4", 00:19:28.489 "uuid": "4ad41955-09ee-5e7d-942f-e09a5455b1ce", 00:19:28.489 "is_configured": true, 00:19:28.489 "data_offset": 2048, 00:19:28.489 "data_size": 63488 00:19:28.489 } 00:19:28.489 ] 00:19:28.489 }' 00:19:28.489 13:39:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:28.489 13:39:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:28.489 13:39:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:28.489 13:39:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:28.489 13:39:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 77787 00:19:28.489 13:39:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 77787 ']' 00:19:28.489 13:39:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 77787 00:19:28.489 13:39:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:19:28.489 13:39:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.489 13:39:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77787 00:19:28.748 killing process with pid 77787 00:19:28.749 Received shutdown signal, test time was about 60.000000 seconds 00:19:28.749 00:19:28.749 Latency(us) 00:19:28.749 [2024-11-20T13:39:28.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.749 [2024-11-20T13:39:28.234Z] =================================================================================================================== 00:19:28.749 [2024-11-20T13:39:28.234Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:28.749 13:39:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:19:28.749 13:39:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:28.749 13:39:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77787' 00:19:28.749 13:39:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 77787 00:19:28.749 [2024-11-20 13:39:27.993342] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:28.749 13:39:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 77787 00:19:28.749 [2024-11-20 13:39:27.993473] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:28.749 [2024-11-20 13:39:27.993545] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:28.749 [2024-11-20 13:39:27.993558] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:29.316 [2024-11-20 13:39:28.526376] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:30.254 13:39:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:19:30.254 00:19:30.254 real 0m26.452s 00:19:30.254 user 0m30.913s 00:19:30.254 sys 0m4.692s 00:19:30.254 13:39:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:30.254 13:39:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.254 ************************************ 00:19:30.254 END TEST raid_rebuild_test_sb 00:19:30.254 ************************************ 00:19:30.254 13:39:29 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:19:30.254 13:39:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:30.254 13:39:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:30.254 13:39:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:19:30.513 ************************************ 00:19:30.513 START TEST raid_rebuild_test_io 00:19:30.513 ************************************ 00:19:30.513 13:39:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:19:30.513 13:39:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:30.513 13:39:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:19:30.514 13:39:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:19:30.514 13:39:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:19:30.514 13:39:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:30.514 13:39:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:30.514 13:39:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:30.514 13:39:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:30.514 13:39:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:30.514 13:39:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:30.514 13:39:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:30.514 13:39:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:30.514 13:39:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:30.514 13:39:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:30.514 13:39:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:30.514 13:39:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:30.514 13:39:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:19:30.514 13:39:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:30.514 13:39:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:30.514 13:39:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:30.514 13:39:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:30.514 13:39:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:30.514 13:39:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:30.514 13:39:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:30.514 13:39:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:30.514 13:39:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:30.514 13:39:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:30.514 13:39:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:30.514 13:39:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:19:30.514 13:39:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78552 00:19:30.514 13:39:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78552 00:19:30.514 13:39:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:30.514 13:39:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 78552 ']' 00:19:30.514 13:39:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.514 13:39:29 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:19:30.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.514 13:39:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.514 13:39:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:30.514 13:39:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:30.514 [2024-11-20 13:39:29.858122] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:19:30.514 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:30.514 Zero copy mechanism will not be used. 00:19:30.514 [2024-11-20 13:39:29.858233] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78552 ] 00:19:30.772 [2024-11-20 13:39:30.038957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.772 [2024-11-20 13:39:30.148045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.029 [2024-11-20 13:39:30.331887] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:31.029 [2024-11-20 13:39:30.331950] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:31.287 13:39:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:31.287 13:39:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:19:31.287 13:39:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:31.287 13:39:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:19:31.287 13:39:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.287 13:39:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:31.546 BaseBdev1_malloc 00:19:31.546 13:39:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.546 13:39:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:31.546 13:39:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.546 13:39:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:31.546 [2024-11-20 13:39:30.788491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:31.546 [2024-11-20 13:39:30.788559] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:31.546 [2024-11-20 13:39:30.788584] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:31.546 [2024-11-20 13:39:30.788598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:31.546 [2024-11-20 13:39:30.790945] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:31.546 [2024-11-20 13:39:30.790990] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:31.546 BaseBdev1 00:19:31.546 13:39:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.546 13:39:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:31.546 13:39:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:31.546 13:39:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.546 13:39:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:19:31.546 BaseBdev2_malloc 00:19:31.546 13:39:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.546 13:39:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:31.546 13:39:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.546 13:39:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:31.546 [2024-11-20 13:39:30.841293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:31.546 [2024-11-20 13:39:30.841357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:31.546 [2024-11-20 13:39:30.841384] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:31.546 [2024-11-20 13:39:30.841398] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:31.546 [2024-11-20 13:39:30.843718] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:31.546 [2024-11-20 13:39:30.843763] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:31.546 BaseBdev2 00:19:31.546 13:39:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.546 13:39:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:31.546 13:39:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:31.546 13:39:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.547 13:39:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:31.547 BaseBdev3_malloc 00:19:31.547 13:39:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.547 13:39:30 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:31.547 13:39:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.547 13:39:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:31.547 [2024-11-20 13:39:30.911041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:31.547 [2024-11-20 13:39:30.911146] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:31.547 [2024-11-20 13:39:30.911170] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:31.547 [2024-11-20 13:39:30.911184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:31.547 [2024-11-20 13:39:30.913629] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:31.547 [2024-11-20 13:39:30.913780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:31.547 BaseBdev3 00:19:31.547 13:39:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.547 13:39:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:31.547 13:39:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:31.547 13:39:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.547 13:39:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:31.547 BaseBdev4_malloc 00:19:31.547 13:39:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.547 13:39:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:19:31.547 13:39:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:31.547 13:39:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:31.547 [2024-11-20 13:39:30.971904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:19:31.547 [2024-11-20 13:39:30.972096] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:31.547 [2024-11-20 13:39:30.972126] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:31.547 [2024-11-20 13:39:30.972140] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:31.547 [2024-11-20 13:39:30.974471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:31.547 [2024-11-20 13:39:30.974508] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:31.547 BaseBdev4 00:19:31.547 13:39:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.547 13:39:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:31.547 13:39:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.547 13:39:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:31.547 spare_malloc 00:19:31.547 13:39:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.547 13:39:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:31.547 13:39:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.547 13:39:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:31.806 spare_delay 00:19:31.806 13:39:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.806 13:39:31 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:31.806 13:39:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.806 13:39:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:31.806 [2024-11-20 13:39:31.041052] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:31.806 [2024-11-20 13:39:31.041113] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:31.806 [2024-11-20 13:39:31.041133] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:31.806 [2024-11-20 13:39:31.041147] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:31.806 [2024-11-20 13:39:31.043474] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:31.806 [2024-11-20 13:39:31.043512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:31.806 spare 00:19:31.806 13:39:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.806 13:39:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:19:31.806 13:39:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.806 13:39:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:31.806 [2024-11-20 13:39:31.053093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:31.806 [2024-11-20 13:39:31.055126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:31.806 [2024-11-20 13:39:31.055185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:31.806 [2024-11-20 13:39:31.055236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:19:31.806 [2024-11-20 13:39:31.055311] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:31.806 [2024-11-20 13:39:31.055326] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:31.806 [2024-11-20 13:39:31.055596] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:31.806 [2024-11-20 13:39:31.055757] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:31.806 [2024-11-20 13:39:31.055770] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:31.806 [2024-11-20 13:39:31.055928] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:31.806 13:39:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.806 13:39:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:31.806 13:39:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:31.806 13:39:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:31.806 13:39:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:31.806 13:39:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:31.806 13:39:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:31.806 13:39:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:31.806 13:39:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:31.806 13:39:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:31.806 13:39:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:19:31.806 13:39:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.806 13:39:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.806 13:39:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:31.806 13:39:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.806 13:39:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.806 13:39:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:31.806 "name": "raid_bdev1", 00:19:31.806 "uuid": "17a1d5c7-b0e0-4842-8eae-474ff3b99ce0", 00:19:31.806 "strip_size_kb": 0, 00:19:31.806 "state": "online", 00:19:31.806 "raid_level": "raid1", 00:19:31.806 "superblock": false, 00:19:31.806 "num_base_bdevs": 4, 00:19:31.806 "num_base_bdevs_discovered": 4, 00:19:31.806 "num_base_bdevs_operational": 4, 00:19:31.806 "base_bdevs_list": [ 00:19:31.806 { 00:19:31.806 "name": "BaseBdev1", 00:19:31.806 "uuid": "e6602c03-b1c1-5c17-8b51-772228df5ab5", 00:19:31.806 "is_configured": true, 00:19:31.806 "data_offset": 0, 00:19:31.806 "data_size": 65536 00:19:31.806 }, 00:19:31.806 { 00:19:31.806 "name": "BaseBdev2", 00:19:31.806 "uuid": "d84472af-afb8-54d9-9b1a-499dd50b80ab", 00:19:31.806 "is_configured": true, 00:19:31.806 "data_offset": 0, 00:19:31.806 "data_size": 65536 00:19:31.806 }, 00:19:31.806 { 00:19:31.806 "name": "BaseBdev3", 00:19:31.806 "uuid": "9763773c-120d-50e8-8fb6-9972cb887797", 00:19:31.806 "is_configured": true, 00:19:31.806 "data_offset": 0, 00:19:31.806 "data_size": 65536 00:19:31.806 }, 00:19:31.806 { 00:19:31.806 "name": "BaseBdev4", 00:19:31.806 "uuid": "1bf7356e-7458-522f-ba86-86cd0bcd0123", 00:19:31.806 "is_configured": true, 00:19:31.806 "data_offset": 0, 00:19:31.806 "data_size": 65536 00:19:31.806 } 00:19:31.806 ] 00:19:31.806 }' 00:19:31.806 
13:39:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:31.806 13:39:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:32.065 13:39:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:32.065 13:39:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.065 13:39:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:32.065 13:39:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:32.065 [2024-11-20 13:39:31.472854] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:32.065 13:39:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.065 13:39:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:19:32.065 13:39:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.065 13:39:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:32.065 13:39:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.065 13:39:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:32.065 13:39:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.325 13:39:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:19:32.325 13:39:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:19:32.325 13:39:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:32.325 13:39:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:32.325 13:39:31 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.325 13:39:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:32.325 [2024-11-20 13:39:31.568346] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:32.325 13:39:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.325 13:39:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:32.325 13:39:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:32.325 13:39:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:32.325 13:39:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:32.325 13:39:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:32.325 13:39:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:32.325 13:39:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:32.325 13:39:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:32.325 13:39:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:32.325 13:39:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:32.325 13:39:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.325 13:39:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.325 13:39:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.325 13:39:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:32.325 13:39:31 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.325 13:39:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:32.325 "name": "raid_bdev1", 00:19:32.325 "uuid": "17a1d5c7-b0e0-4842-8eae-474ff3b99ce0", 00:19:32.325 "strip_size_kb": 0, 00:19:32.325 "state": "online", 00:19:32.325 "raid_level": "raid1", 00:19:32.325 "superblock": false, 00:19:32.325 "num_base_bdevs": 4, 00:19:32.325 "num_base_bdevs_discovered": 3, 00:19:32.325 "num_base_bdevs_operational": 3, 00:19:32.325 "base_bdevs_list": [ 00:19:32.325 { 00:19:32.325 "name": null, 00:19:32.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.325 "is_configured": false, 00:19:32.325 "data_offset": 0, 00:19:32.325 "data_size": 65536 00:19:32.325 }, 00:19:32.325 { 00:19:32.325 "name": "BaseBdev2", 00:19:32.325 "uuid": "d84472af-afb8-54d9-9b1a-499dd50b80ab", 00:19:32.325 "is_configured": true, 00:19:32.325 "data_offset": 0, 00:19:32.325 "data_size": 65536 00:19:32.325 }, 00:19:32.325 { 00:19:32.325 "name": "BaseBdev3", 00:19:32.325 "uuid": "9763773c-120d-50e8-8fb6-9972cb887797", 00:19:32.325 "is_configured": true, 00:19:32.325 "data_offset": 0, 00:19:32.325 "data_size": 65536 00:19:32.325 }, 00:19:32.325 { 00:19:32.325 "name": "BaseBdev4", 00:19:32.325 "uuid": "1bf7356e-7458-522f-ba86-86cd0bcd0123", 00:19:32.325 "is_configured": true, 00:19:32.325 "data_offset": 0, 00:19:32.325 "data_size": 65536 00:19:32.325 } 00:19:32.325 ] 00:19:32.325 }' 00:19:32.325 13:39:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:32.325 13:39:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:32.325 [2024-11-20 13:39:31.672899] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:32.325 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:32.325 Zero copy mechanism will not be used. 00:19:32.325 Running I/O for 60 seconds... 
00:19:32.584 13:39:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:32.584 13:39:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.584 13:39:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:32.584 [2024-11-20 13:39:32.021721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:32.584 13:39:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.584 13:39:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:32.889 [2024-11-20 13:39:32.087183] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:19:32.889 [2024-11-20 13:39:32.089519] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:32.889 [2024-11-20 13:39:32.227056] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:32.889 [2024-11-20 13:39:32.357170] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:32.889 [2024-11-20 13:39:32.358093] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:33.474 174.00 IOPS, 522.00 MiB/s [2024-11-20T13:39:32.959Z] [2024-11-20 13:39:32.688805] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:33.474 [2024-11-20 13:39:32.690249] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:33.474 [2024-11-20 13:39:32.924918] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:33.734 13:39:33 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:33.734 13:39:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:33.734 13:39:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:33.734 13:39:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:33.734 13:39:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:33.734 13:39:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.734 13:39:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.734 13:39:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.734 13:39:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:33.734 13:39:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.734 13:39:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:33.734 "name": "raid_bdev1", 00:19:33.734 "uuid": "17a1d5c7-b0e0-4842-8eae-474ff3b99ce0", 00:19:33.734 "strip_size_kb": 0, 00:19:33.734 "state": "online", 00:19:33.734 "raid_level": "raid1", 00:19:33.734 "superblock": false, 00:19:33.734 "num_base_bdevs": 4, 00:19:33.734 "num_base_bdevs_discovered": 4, 00:19:33.734 "num_base_bdevs_operational": 4, 00:19:33.734 "process": { 00:19:33.734 "type": "rebuild", 00:19:33.734 "target": "spare", 00:19:33.734 "progress": { 00:19:33.734 "blocks": 12288, 00:19:33.734 "percent": 18 00:19:33.734 } 00:19:33.734 }, 00:19:33.734 "base_bdevs_list": [ 00:19:33.734 { 00:19:33.734 "name": "spare", 00:19:33.734 "uuid": "a4bb89c5-c86b-5900-b65c-73516277f7e8", 00:19:33.734 "is_configured": true, 00:19:33.734 "data_offset": 0, 00:19:33.734 "data_size": 65536 00:19:33.734 }, 00:19:33.734 { 
00:19:33.734 "name": "BaseBdev2", 00:19:33.734 "uuid": "d84472af-afb8-54d9-9b1a-499dd50b80ab", 00:19:33.734 "is_configured": true, 00:19:33.734 "data_offset": 0, 00:19:33.734 "data_size": 65536 00:19:33.734 }, 00:19:33.734 { 00:19:33.734 "name": "BaseBdev3", 00:19:33.734 "uuid": "9763773c-120d-50e8-8fb6-9972cb887797", 00:19:33.734 "is_configured": true, 00:19:33.734 "data_offset": 0, 00:19:33.734 "data_size": 65536 00:19:33.734 }, 00:19:33.734 { 00:19:33.734 "name": "BaseBdev4", 00:19:33.734 "uuid": "1bf7356e-7458-522f-ba86-86cd0bcd0123", 00:19:33.734 "is_configured": true, 00:19:33.734 "data_offset": 0, 00:19:33.734 "data_size": 65536 00:19:33.734 } 00:19:33.734 ] 00:19:33.734 }' 00:19:33.734 13:39:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:33.734 13:39:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:33.734 [2024-11-20 13:39:33.169955] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:33.734 13:39:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:33.734 [2024-11-20 13:39:33.176378] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:33.734 13:39:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:33.734 13:39:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:33.734 13:39:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.734 13:39:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:33.734 [2024-11-20 13:39:33.209459] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:33.993 [2024-11-20 13:39:33.287299] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:33.993 [2024-11-20 13:39:33.395380] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:33.993 [2024-11-20 13:39:33.399164] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:33.993 [2024-11-20 13:39:33.399335] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:33.993 [2024-11-20 13:39:33.399394] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:33.993 [2024-11-20 13:39:33.444840] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:19:33.993 13:39:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.993 13:39:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:33.993 13:39:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:33.993 13:39:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:33.993 13:39:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:33.993 13:39:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:33.993 13:39:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:33.993 13:39:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:33.993 13:39:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:33.993 13:39:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:33.993 13:39:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:33.993 13:39:33 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.993 13:39:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.993 13:39:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.993 13:39:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:34.252 13:39:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.252 13:39:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:34.253 "name": "raid_bdev1", 00:19:34.253 "uuid": "17a1d5c7-b0e0-4842-8eae-474ff3b99ce0", 00:19:34.253 "strip_size_kb": 0, 00:19:34.253 "state": "online", 00:19:34.253 "raid_level": "raid1", 00:19:34.253 "superblock": false, 00:19:34.253 "num_base_bdevs": 4, 00:19:34.253 "num_base_bdevs_discovered": 3, 00:19:34.253 "num_base_bdevs_operational": 3, 00:19:34.253 "base_bdevs_list": [ 00:19:34.253 { 00:19:34.253 "name": null, 00:19:34.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.253 "is_configured": false, 00:19:34.253 "data_offset": 0, 00:19:34.253 "data_size": 65536 00:19:34.253 }, 00:19:34.253 { 00:19:34.253 "name": "BaseBdev2", 00:19:34.253 "uuid": "d84472af-afb8-54d9-9b1a-499dd50b80ab", 00:19:34.253 "is_configured": true, 00:19:34.253 "data_offset": 0, 00:19:34.253 "data_size": 65536 00:19:34.253 }, 00:19:34.253 { 00:19:34.253 "name": "BaseBdev3", 00:19:34.253 "uuid": "9763773c-120d-50e8-8fb6-9972cb887797", 00:19:34.253 "is_configured": true, 00:19:34.253 "data_offset": 0, 00:19:34.253 "data_size": 65536 00:19:34.253 }, 00:19:34.253 { 00:19:34.253 "name": "BaseBdev4", 00:19:34.253 "uuid": "1bf7356e-7458-522f-ba86-86cd0bcd0123", 00:19:34.253 "is_configured": true, 00:19:34.253 "data_offset": 0, 00:19:34.253 "data_size": 65536 00:19:34.253 } 00:19:34.253 ] 00:19:34.253 }' 00:19:34.253 13:39:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:19:34.253 13:39:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:34.512 144.50 IOPS, 433.50 MiB/s [2024-11-20T13:39:33.997Z] 13:39:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:34.512 13:39:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:34.512 13:39:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:34.512 13:39:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:34.512 13:39:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:34.512 13:39:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.512 13:39:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.512 13:39:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.512 13:39:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:34.512 13:39:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.512 13:39:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:34.512 "name": "raid_bdev1", 00:19:34.512 "uuid": "17a1d5c7-b0e0-4842-8eae-474ff3b99ce0", 00:19:34.512 "strip_size_kb": 0, 00:19:34.512 "state": "online", 00:19:34.512 "raid_level": "raid1", 00:19:34.512 "superblock": false, 00:19:34.512 "num_base_bdevs": 4, 00:19:34.512 "num_base_bdevs_discovered": 3, 00:19:34.512 "num_base_bdevs_operational": 3, 00:19:34.512 "base_bdevs_list": [ 00:19:34.512 { 00:19:34.512 "name": null, 00:19:34.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.512 "is_configured": false, 00:19:34.512 "data_offset": 0, 00:19:34.512 "data_size": 65536 00:19:34.512 }, 00:19:34.512 { 
00:19:34.512 "name": "BaseBdev2", 00:19:34.512 "uuid": "d84472af-afb8-54d9-9b1a-499dd50b80ab", 00:19:34.512 "is_configured": true, 00:19:34.512 "data_offset": 0, 00:19:34.512 "data_size": 65536 00:19:34.512 }, 00:19:34.512 { 00:19:34.512 "name": "BaseBdev3", 00:19:34.512 "uuid": "9763773c-120d-50e8-8fb6-9972cb887797", 00:19:34.512 "is_configured": true, 00:19:34.512 "data_offset": 0, 00:19:34.512 "data_size": 65536 00:19:34.512 }, 00:19:34.512 { 00:19:34.512 "name": "BaseBdev4", 00:19:34.513 "uuid": "1bf7356e-7458-522f-ba86-86cd0bcd0123", 00:19:34.513 "is_configured": true, 00:19:34.513 "data_offset": 0, 00:19:34.513 "data_size": 65536 00:19:34.513 } 00:19:34.513 ] 00:19:34.513 }' 00:19:34.513 13:39:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:34.772 13:39:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:34.772 13:39:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:34.772 13:39:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:34.772 13:39:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:34.772 13:39:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.772 13:39:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:34.772 [2024-11-20 13:39:34.064221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:34.772 13:39:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.772 13:39:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:34.772 [2024-11-20 13:39:34.121418] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:34.772 [2024-11-20 13:39:34.123940] bdev_raid.c:2935:raid_bdev_process_thread_init: 
*NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:35.031 [2024-11-20 13:39:34.261590] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:35.031 [2024-11-20 13:39:34.479627] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:35.031 [2024-11-20 13:39:34.479947] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:35.555 145.67 IOPS, 437.00 MiB/s [2024-11-20T13:39:35.040Z] [2024-11-20 13:39:34.810036] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:35.555 [2024-11-20 13:39:34.810613] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:35.555 [2024-11-20 13:39:34.914427] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:35.820 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:35.820 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:35.820 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:35.820 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:35.820 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:35.820 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.820 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.820 13:39:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.820 13:39:35 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.820 13:39:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.820 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:35.820 "name": "raid_bdev1", 00:19:35.820 "uuid": "17a1d5c7-b0e0-4842-8eae-474ff3b99ce0", 00:19:35.820 "strip_size_kb": 0, 00:19:35.820 "state": "online", 00:19:35.820 "raid_level": "raid1", 00:19:35.820 "superblock": false, 00:19:35.820 "num_base_bdevs": 4, 00:19:35.820 "num_base_bdevs_discovered": 4, 00:19:35.820 "num_base_bdevs_operational": 4, 00:19:35.820 "process": { 00:19:35.820 "type": "rebuild", 00:19:35.820 "target": "spare", 00:19:35.820 "progress": { 00:19:35.820 "blocks": 12288, 00:19:35.820 "percent": 18 00:19:35.820 } 00:19:35.820 }, 00:19:35.820 "base_bdevs_list": [ 00:19:35.820 { 00:19:35.820 "name": "spare", 00:19:35.820 "uuid": "a4bb89c5-c86b-5900-b65c-73516277f7e8", 00:19:35.820 "is_configured": true, 00:19:35.820 "data_offset": 0, 00:19:35.820 "data_size": 65536 00:19:35.820 }, 00:19:35.820 { 00:19:35.820 "name": "BaseBdev2", 00:19:35.820 "uuid": "d84472af-afb8-54d9-9b1a-499dd50b80ab", 00:19:35.820 "is_configured": true, 00:19:35.820 "data_offset": 0, 00:19:35.820 "data_size": 65536 00:19:35.820 }, 00:19:35.820 { 00:19:35.820 "name": "BaseBdev3", 00:19:35.820 "uuid": "9763773c-120d-50e8-8fb6-9972cb887797", 00:19:35.820 "is_configured": true, 00:19:35.820 "data_offset": 0, 00:19:35.820 "data_size": 65536 00:19:35.820 }, 00:19:35.820 { 00:19:35.820 "name": "BaseBdev4", 00:19:35.820 "uuid": "1bf7356e-7458-522f-ba86-86cd0bcd0123", 00:19:35.820 "is_configured": true, 00:19:35.820 "data_offset": 0, 00:19:35.820 "data_size": 65536 00:19:35.820 } 00:19:35.820 ] 00:19:35.820 }' 00:19:35.820 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:35.820 [2024-11-20 13:39:35.168555] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:35.820 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:35.820 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:35.820 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:35.821 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:19:35.821 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:19:35.821 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:35.821 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:19:35.821 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:35.821 13:39:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.821 13:39:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.821 [2024-11-20 13:39:35.260740] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:36.087 [2024-11-20 13:39:35.396707] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:36.087 [2024-11-20 13:39:35.397021] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:36.087 [2024-11-20 13:39:35.404116] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:19:36.087 [2024-11-20 13:39:35.404147] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:19:36.087 13:39:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:19:36.087 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:19:36.087 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:19:36.087 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:36.087 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:36.087 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:36.087 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:36.087 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:36.087 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.087 13:39:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.087 13:39:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:36.087 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.088 13:39:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.088 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:36.088 "name": "raid_bdev1", 00:19:36.088 "uuid": "17a1d5c7-b0e0-4842-8eae-474ff3b99ce0", 00:19:36.088 "strip_size_kb": 0, 00:19:36.088 "state": "online", 00:19:36.088 "raid_level": "raid1", 00:19:36.088 "superblock": false, 00:19:36.088 "num_base_bdevs": 4, 00:19:36.088 "num_base_bdevs_discovered": 3, 00:19:36.088 "num_base_bdevs_operational": 3, 00:19:36.088 "process": { 00:19:36.088 "type": "rebuild", 00:19:36.088 "target": "spare", 00:19:36.088 "progress": { 00:19:36.088 "blocks": 16384, 00:19:36.088 "percent": 25 00:19:36.088 } 
00:19:36.088 }, 00:19:36.088 "base_bdevs_list": [ 00:19:36.088 { 00:19:36.088 "name": "spare", 00:19:36.088 "uuid": "a4bb89c5-c86b-5900-b65c-73516277f7e8", 00:19:36.088 "is_configured": true, 00:19:36.088 "data_offset": 0, 00:19:36.088 "data_size": 65536 00:19:36.088 }, 00:19:36.088 { 00:19:36.088 "name": null, 00:19:36.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.088 "is_configured": false, 00:19:36.088 "data_offset": 0, 00:19:36.088 "data_size": 65536 00:19:36.088 }, 00:19:36.088 { 00:19:36.088 "name": "BaseBdev3", 00:19:36.088 "uuid": "9763773c-120d-50e8-8fb6-9972cb887797", 00:19:36.088 "is_configured": true, 00:19:36.088 "data_offset": 0, 00:19:36.088 "data_size": 65536 00:19:36.088 }, 00:19:36.088 { 00:19:36.088 "name": "BaseBdev4", 00:19:36.088 "uuid": "1bf7356e-7458-522f-ba86-86cd0bcd0123", 00:19:36.088 "is_configured": true, 00:19:36.088 "data_offset": 0, 00:19:36.088 "data_size": 65536 00:19:36.088 } 00:19:36.088 ] 00:19:36.088 }' 00:19:36.088 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:36.088 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:36.088 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:36.088 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:36.088 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=483 00:19:36.088 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:36.088 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:36.088 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:36.088 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:19:36.088 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:36.088 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:36.088 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.088 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.088 13:39:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.088 13:39:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:36.088 13:39:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.356 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:36.356 "name": "raid_bdev1", 00:19:36.356 "uuid": "17a1d5c7-b0e0-4842-8eae-474ff3b99ce0", 00:19:36.356 "strip_size_kb": 0, 00:19:36.356 "state": "online", 00:19:36.356 "raid_level": "raid1", 00:19:36.356 "superblock": false, 00:19:36.356 "num_base_bdevs": 4, 00:19:36.356 "num_base_bdevs_discovered": 3, 00:19:36.356 "num_base_bdevs_operational": 3, 00:19:36.356 "process": { 00:19:36.356 "type": "rebuild", 00:19:36.356 "target": "spare", 00:19:36.356 "progress": { 00:19:36.356 "blocks": 18432, 00:19:36.356 "percent": 28 00:19:36.356 } 00:19:36.356 }, 00:19:36.356 "base_bdevs_list": [ 00:19:36.356 { 00:19:36.356 "name": "spare", 00:19:36.356 "uuid": "a4bb89c5-c86b-5900-b65c-73516277f7e8", 00:19:36.356 "is_configured": true, 00:19:36.356 "data_offset": 0, 00:19:36.356 "data_size": 65536 00:19:36.356 }, 00:19:36.356 { 00:19:36.356 "name": null, 00:19:36.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.356 "is_configured": false, 00:19:36.356 "data_offset": 0, 00:19:36.356 "data_size": 65536 00:19:36.356 }, 00:19:36.356 { 00:19:36.356 "name": "BaseBdev3", 00:19:36.356 "uuid": 
"9763773c-120d-50e8-8fb6-9972cb887797", 00:19:36.356 "is_configured": true, 00:19:36.356 "data_offset": 0, 00:19:36.356 "data_size": 65536 00:19:36.356 }, 00:19:36.356 { 00:19:36.356 "name": "BaseBdev4", 00:19:36.356 "uuid": "1bf7356e-7458-522f-ba86-86cd0bcd0123", 00:19:36.356 "is_configured": true, 00:19:36.356 "data_offset": 0, 00:19:36.356 "data_size": 65536 00:19:36.356 } 00:19:36.356 ] 00:19:36.356 }' 00:19:36.356 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:36.356 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:36.356 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:36.356 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:36.356 13:39:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:36.625 134.00 IOPS, 402.00 MiB/s [2024-11-20T13:39:36.110Z] [2024-11-20 13:39:35.993344] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:19:36.897 [2024-11-20 13:39:36.225943] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:19:37.485 119.20 IOPS, 357.60 MiB/s [2024-11-20T13:39:36.970Z] 13:39:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:37.485 13:39:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:37.485 13:39:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:37.485 13:39:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:37.485 13:39:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:37.485 13:39:36 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:37.485 13:39:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.485 13:39:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.485 13:39:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:37.485 13:39:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.485 13:39:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.485 13:39:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:37.485 "name": "raid_bdev1", 00:19:37.485 "uuid": "17a1d5c7-b0e0-4842-8eae-474ff3b99ce0", 00:19:37.485 "strip_size_kb": 0, 00:19:37.485 "state": "online", 00:19:37.485 "raid_level": "raid1", 00:19:37.485 "superblock": false, 00:19:37.485 "num_base_bdevs": 4, 00:19:37.485 "num_base_bdevs_discovered": 3, 00:19:37.485 "num_base_bdevs_operational": 3, 00:19:37.485 "process": { 00:19:37.485 "type": "rebuild", 00:19:37.485 "target": "spare", 00:19:37.485 "progress": { 00:19:37.485 "blocks": 34816, 00:19:37.485 "percent": 53 00:19:37.485 } 00:19:37.485 }, 00:19:37.485 "base_bdevs_list": [ 00:19:37.485 { 00:19:37.485 "name": "spare", 00:19:37.485 "uuid": "a4bb89c5-c86b-5900-b65c-73516277f7e8", 00:19:37.485 "is_configured": true, 00:19:37.485 "data_offset": 0, 00:19:37.485 "data_size": 65536 00:19:37.485 }, 00:19:37.485 { 00:19:37.485 "name": null, 00:19:37.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.485 "is_configured": false, 00:19:37.485 "data_offset": 0, 00:19:37.485 "data_size": 65536 00:19:37.485 }, 00:19:37.485 { 00:19:37.485 "name": "BaseBdev3", 00:19:37.485 "uuid": "9763773c-120d-50e8-8fb6-9972cb887797", 00:19:37.485 "is_configured": true, 00:19:37.485 "data_offset": 0, 00:19:37.485 "data_size": 65536 00:19:37.485 }, 
00:19:37.485 { 00:19:37.485 "name": "BaseBdev4", 00:19:37.485 "uuid": "1bf7356e-7458-522f-ba86-86cd0bcd0123", 00:19:37.485 "is_configured": true, 00:19:37.485 "data_offset": 0, 00:19:37.485 "data_size": 65536 00:19:37.485 } 00:19:37.485 ] 00:19:37.485 }' 00:19:37.485 13:39:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:37.485 13:39:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:37.485 13:39:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:37.485 13:39:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:37.485 13:39:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:38.054 [2024-11-20 13:39:37.272760] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:19:38.312 107.50 IOPS, 322.50 MiB/s [2024-11-20T13:39:37.798Z] 13:39:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:38.313 13:39:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:38.313 13:39:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:38.313 13:39:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:38.313 13:39:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:38.313 13:39:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:38.570 13:39:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.570 13:39:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.570 13:39:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- 
# set +x 00:19:38.570 13:39:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.570 13:39:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.570 [2024-11-20 13:39:37.829261] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:19:38.570 13:39:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:38.570 "name": "raid_bdev1", 00:19:38.570 "uuid": "17a1d5c7-b0e0-4842-8eae-474ff3b99ce0", 00:19:38.570 "strip_size_kb": 0, 00:19:38.570 "state": "online", 00:19:38.570 "raid_level": "raid1", 00:19:38.570 "superblock": false, 00:19:38.570 "num_base_bdevs": 4, 00:19:38.570 "num_base_bdevs_discovered": 3, 00:19:38.570 "num_base_bdevs_operational": 3, 00:19:38.570 "process": { 00:19:38.570 "type": "rebuild", 00:19:38.570 "target": "spare", 00:19:38.570 "progress": { 00:19:38.570 "blocks": 55296, 00:19:38.570 "percent": 84 00:19:38.570 } 00:19:38.570 }, 00:19:38.570 "base_bdevs_list": [ 00:19:38.570 { 00:19:38.570 "name": "spare", 00:19:38.570 "uuid": "a4bb89c5-c86b-5900-b65c-73516277f7e8", 00:19:38.570 "is_configured": true, 00:19:38.570 "data_offset": 0, 00:19:38.570 "data_size": 65536 00:19:38.570 }, 00:19:38.570 { 00:19:38.570 "name": null, 00:19:38.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.570 "is_configured": false, 00:19:38.570 "data_offset": 0, 00:19:38.570 "data_size": 65536 00:19:38.570 }, 00:19:38.570 { 00:19:38.571 "name": "BaseBdev3", 00:19:38.571 "uuid": "9763773c-120d-50e8-8fb6-9972cb887797", 00:19:38.571 "is_configured": true, 00:19:38.571 "data_offset": 0, 00:19:38.571 "data_size": 65536 00:19:38.571 }, 00:19:38.571 { 00:19:38.571 "name": "BaseBdev4", 00:19:38.571 "uuid": "1bf7356e-7458-522f-ba86-86cd0bcd0123", 00:19:38.571 "is_configured": true, 00:19:38.571 "data_offset": 0, 00:19:38.571 "data_size": 65536 00:19:38.571 } 
00:19:38.571 ] 00:19:38.571 }' 00:19:38.571 13:39:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:38.571 13:39:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:38.571 13:39:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:38.571 13:39:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:38.571 13:39:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:38.571 [2024-11-20 13:39:38.037547] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:19:39.138 [2024-11-20 13:39:38.370730] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:39.138 [2024-11-20 13:39:38.470536] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:39.138 [2024-11-20 13:39:38.479544] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:39.656 98.29 IOPS, 294.86 MiB/s [2024-11-20T13:39:39.141Z] 13:39:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:39.656 13:39:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:39.656 13:39:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:39.656 13:39:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:39.656 13:39:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:39.656 13:39:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:39.656 13:39:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.656 13:39:38 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.656 13:39:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:39.656 13:39:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.656 13:39:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.656 13:39:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:39.656 "name": "raid_bdev1", 00:19:39.656 "uuid": "17a1d5c7-b0e0-4842-8eae-474ff3b99ce0", 00:19:39.656 "strip_size_kb": 0, 00:19:39.656 "state": "online", 00:19:39.656 "raid_level": "raid1", 00:19:39.656 "superblock": false, 00:19:39.656 "num_base_bdevs": 4, 00:19:39.656 "num_base_bdevs_discovered": 3, 00:19:39.656 "num_base_bdevs_operational": 3, 00:19:39.656 "base_bdevs_list": [ 00:19:39.656 { 00:19:39.656 "name": "spare", 00:19:39.656 "uuid": "a4bb89c5-c86b-5900-b65c-73516277f7e8", 00:19:39.656 "is_configured": true, 00:19:39.656 "data_offset": 0, 00:19:39.656 "data_size": 65536 00:19:39.656 }, 00:19:39.656 { 00:19:39.656 "name": null, 00:19:39.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.656 "is_configured": false, 00:19:39.656 "data_offset": 0, 00:19:39.656 "data_size": 65536 00:19:39.656 }, 00:19:39.656 { 00:19:39.656 "name": "BaseBdev3", 00:19:39.656 "uuid": "9763773c-120d-50e8-8fb6-9972cb887797", 00:19:39.656 "is_configured": true, 00:19:39.656 "data_offset": 0, 00:19:39.656 "data_size": 65536 00:19:39.656 }, 00:19:39.656 { 00:19:39.656 "name": "BaseBdev4", 00:19:39.656 "uuid": "1bf7356e-7458-522f-ba86-86cd0bcd0123", 00:19:39.656 "is_configured": true, 00:19:39.656 "data_offset": 0, 00:19:39.656 "data_size": 65536 00:19:39.656 } 00:19:39.656 ] 00:19:39.656 }' 00:19:39.656 13:39:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:39.656 13:39:39 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:39.656 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:39.656 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:39.656 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:19:39.656 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:39.656 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:39.656 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:39.656 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:39.656 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:39.656 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.656 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.656 13:39:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.656 13:39:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:39.656 13:39:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.656 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:39.656 "name": "raid_bdev1", 00:19:39.656 "uuid": "17a1d5c7-b0e0-4842-8eae-474ff3b99ce0", 00:19:39.656 "strip_size_kb": 0, 00:19:39.656 "state": "online", 00:19:39.656 "raid_level": "raid1", 00:19:39.656 "superblock": false, 00:19:39.656 "num_base_bdevs": 4, 00:19:39.656 "num_base_bdevs_discovered": 3, 00:19:39.656 "num_base_bdevs_operational": 3, 00:19:39.656 "base_bdevs_list": [ 00:19:39.656 { 00:19:39.656 
"name": "spare", 00:19:39.656 "uuid": "a4bb89c5-c86b-5900-b65c-73516277f7e8", 00:19:39.656 "is_configured": true, 00:19:39.657 "data_offset": 0, 00:19:39.657 "data_size": 65536 00:19:39.657 }, 00:19:39.657 { 00:19:39.657 "name": null, 00:19:39.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.657 "is_configured": false, 00:19:39.657 "data_offset": 0, 00:19:39.657 "data_size": 65536 00:19:39.657 }, 00:19:39.657 { 00:19:39.657 "name": "BaseBdev3", 00:19:39.657 "uuid": "9763773c-120d-50e8-8fb6-9972cb887797", 00:19:39.657 "is_configured": true, 00:19:39.657 "data_offset": 0, 00:19:39.657 "data_size": 65536 00:19:39.657 }, 00:19:39.657 { 00:19:39.657 "name": "BaseBdev4", 00:19:39.657 "uuid": "1bf7356e-7458-522f-ba86-86cd0bcd0123", 00:19:39.657 "is_configured": true, 00:19:39.657 "data_offset": 0, 00:19:39.657 "data_size": 65536 00:19:39.657 } 00:19:39.657 ] 00:19:39.657 }' 00:19:39.657 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:39.657 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:39.657 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:39.916 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:39.916 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:39.916 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:39.916 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:39.916 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:39.916 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:39.916 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:19:39.916 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:39.916 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:39.916 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:39.916 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:39.916 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.916 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.916 13:39:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.916 13:39:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:39.916 13:39:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.916 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:39.916 "name": "raid_bdev1", 00:19:39.916 "uuid": "17a1d5c7-b0e0-4842-8eae-474ff3b99ce0", 00:19:39.916 "strip_size_kb": 0, 00:19:39.917 "state": "online", 00:19:39.917 "raid_level": "raid1", 00:19:39.917 "superblock": false, 00:19:39.917 "num_base_bdevs": 4, 00:19:39.917 "num_base_bdevs_discovered": 3, 00:19:39.917 "num_base_bdevs_operational": 3, 00:19:39.917 "base_bdevs_list": [ 00:19:39.917 { 00:19:39.917 "name": "spare", 00:19:39.917 "uuid": "a4bb89c5-c86b-5900-b65c-73516277f7e8", 00:19:39.917 "is_configured": true, 00:19:39.917 "data_offset": 0, 00:19:39.917 "data_size": 65536 00:19:39.917 }, 00:19:39.917 { 00:19:39.917 "name": null, 00:19:39.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.917 "is_configured": false, 00:19:39.917 "data_offset": 0, 00:19:39.917 "data_size": 65536 00:19:39.917 }, 00:19:39.917 { 00:19:39.917 "name": "BaseBdev3", 00:19:39.917 "uuid": 
"9763773c-120d-50e8-8fb6-9972cb887797", 00:19:39.917 "is_configured": true, 00:19:39.917 "data_offset": 0, 00:19:39.917 "data_size": 65536 00:19:39.917 }, 00:19:39.917 { 00:19:39.917 "name": "BaseBdev4", 00:19:39.917 "uuid": "1bf7356e-7458-522f-ba86-86cd0bcd0123", 00:19:39.917 "is_configured": true, 00:19:39.917 "data_offset": 0, 00:19:39.917 "data_size": 65536 00:19:39.917 } 00:19:39.917 ] 00:19:39.917 }' 00:19:39.917 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:39.917 13:39:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:40.184 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:40.185 13:39:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.185 13:39:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:40.185 [2024-11-20 13:39:39.586606] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:40.185 [2024-11-20 13:39:39.586660] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:40.444 90.62 IOPS, 271.88 MiB/s 00:19:40.444 Latency(us) 00:19:40.444 [2024-11-20T13:39:39.929Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.444 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:19:40.444 raid_bdev1 : 8.02 90.57 271.71 0.00 0.00 15829.94 309.26 116227.70 00:19:40.444 [2024-11-20T13:39:39.929Z] =================================================================================================================== 00:19:40.444 [2024-11-20T13:39:39.929Z] Total : 90.57 271.71 0.00 0.00 15829.94 309.26 116227.70 00:19:40.444 [2024-11-20 13:39:39.701337] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:40.444 [2024-11-20 13:39:39.701531] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:19:40.444 [2024-11-20 13:39:39.701665] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:40.444 [2024-11-20 13:39:39.701784] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:40.444 { 00:19:40.444 "results": [ 00:19:40.444 { 00:19:40.444 "job": "raid_bdev1", 00:19:40.444 "core_mask": "0x1", 00:19:40.444 "workload": "randrw", 00:19:40.444 "percentage": 50, 00:19:40.444 "status": "finished", 00:19:40.444 "queue_depth": 2, 00:19:40.444 "io_size": 3145728, 00:19:40.444 "runtime": 8.015773, 00:19:40.444 "iops": 90.5714271100242, 00:19:40.444 "mibps": 271.7142813300726, 00:19:40.444 "io_failed": 0, 00:19:40.444 "io_timeout": 0, 00:19:40.444 "avg_latency_us": 15829.939721420118, 00:19:40.444 "min_latency_us": 309.2562248995984, 00:19:40.444 "max_latency_us": 116227.70120481927 00:19:40.444 } 00:19:40.444 ], 00:19:40.444 "core_count": 1 00:19:40.444 } 00:19:40.444 13:39:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.444 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.444 13:39:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.444 13:39:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:40.444 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:19:40.444 13:39:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.444 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:40.444 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:40.444 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:19:40.444 13:39:39 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:19:40.444 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:40.444 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:19:40.444 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:40.444 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:40.444 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:40.444 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:19:40.444 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:40.444 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:40.444 13:39:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:19:40.703 /dev/nbd0 00:19:40.703 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:40.703 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:40.703 13:39:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:40.703 13:39:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:19:40.703 13:39:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:40.703 13:39:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:40.703 13:39:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:40.703 13:39:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:19:40.703 13:39:40 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:40.703 13:39:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:40.703 13:39:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:40.703 1+0 records in 00:19:40.703 1+0 records out 00:19:40.703 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000437351 s, 9.4 MB/s 00:19:40.703 13:39:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:40.703 13:39:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:19:40.703 13:39:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:40.704 13:39:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:40.704 13:39:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:19:40.704 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:40.704 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:40.704 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:19:40.704 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:19:40.704 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:19:40.704 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:19:40.704 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:19:40.704 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:19:40.704 13:39:40 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:40.704 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:19:40.704 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:40.704 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:19:40.704 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:40.704 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:19:40.704 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:40.704 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:40.704 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:19:40.962 /dev/nbd1 00:19:40.962 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:40.962 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:40.962 13:39:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:40.962 13:39:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:19:40.962 13:39:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:40.962 13:39:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:40.962 13:39:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:40.962 13:39:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:19:40.962 13:39:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:40.962 13:39:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i 
<= 20 )) 00:19:40.962 13:39:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:40.962 1+0 records in 00:19:40.962 1+0 records out 00:19:40.962 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418585 s, 9.8 MB/s 00:19:40.962 13:39:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:40.962 13:39:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:19:40.962 13:39:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:40.962 13:39:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:40.962 13:39:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:19:40.962 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:40.962 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:40.962 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:41.221 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:19:41.221 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:41.221 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:19:41.221 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:41.221 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:19:41.221 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:41.221 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:41.481 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:41.481 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:41.481 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:41.481 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:41.481 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:41.481 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:41.481 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:19:41.481 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:41.481 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:19:41.481 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:19:41.481 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:19:41.481 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:41.481 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:19:41.481 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:41.481 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:19:41.481 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:41.481 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:19:41.481 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:41.481 13:39:40 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:41.481 13:39:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:19:41.740 /dev/nbd1 00:19:41.740 13:39:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:41.740 13:39:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:41.740 13:39:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:41.740 13:39:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:19:41.740 13:39:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:41.740 13:39:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:41.740 13:39:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:41.740 13:39:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:19:41.740 13:39:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:41.740 13:39:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:41.740 13:39:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:41.740 1+0 records in 00:19:41.740 1+0 records out 00:19:41.740 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000389906 s, 10.5 MB/s 00:19:41.740 13:39:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:41.740 13:39:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:19:41.740 13:39:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:41.740 13:39:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:41.740 13:39:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:19:41.740 13:39:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:41.740 13:39:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:41.740 13:39:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:41.740 13:39:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:19:41.740 13:39:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:41.741 13:39:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:19:41.741 13:39:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:41.741 13:39:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:19:41.741 13:39:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:41.741 13:39:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:42.000 13:39:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:42.000 13:39:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:42.000 13:39:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:42.000 13:39:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:42.000 13:39:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:42.000 13:39:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 
00:19:42.000 13:39:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:19:42.000 13:39:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:42.000 13:39:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:42.000 13:39:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:42.000 13:39:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:42.000 13:39:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:42.000 13:39:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:19:42.000 13:39:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:42.000 13:39:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:42.275 13:39:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:42.275 13:39:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:42.275 13:39:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:42.275 13:39:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:42.275 13:39:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:42.275 13:39:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:42.275 13:39:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:19:42.275 13:39:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:42.275 13:39:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:19:42.275 13:39:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # 
killprocess 78552 00:19:42.275 13:39:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 78552 ']' 00:19:42.275 13:39:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 78552 00:19:42.275 13:39:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:19:42.275 13:39:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:42.275 13:39:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78552 00:19:42.275 13:39:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:42.275 13:39:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:42.275 killing process with pid 78552 00:19:42.275 Received shutdown signal, test time was about 10.025184 seconds 00:19:42.275 00:19:42.275 Latency(us) 00:19:42.275 [2024-11-20T13:39:41.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.275 [2024-11-20T13:39:41.760Z] =================================================================================================================== 00:19:42.275 [2024-11-20T13:39:41.760Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:42.275 13:39:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78552' 00:19:42.275 13:39:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 78552 00:19:42.275 13:39:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 78552 00:19:42.275 [2024-11-20 13:39:41.684573] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:42.841 [2024-11-20 13:39:42.110552] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:19:44.219 00:19:44.219 real 0m13.615s 00:19:44.219 user 
0m17.031s 00:19:44.219 sys 0m2.077s 00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:44.219 ************************************ 00:19:44.219 END TEST raid_rebuild_test_io 00:19:44.219 ************************************ 00:19:44.219 13:39:43 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:19:44.219 13:39:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:44.219 13:39:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:44.219 13:39:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:44.219 ************************************ 00:19:44.219 START TEST raid_rebuild_test_sb_io 00:19:44.219 ************************************ 00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78966 00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78966 00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 78966 ']' 00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:44.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:44.219 13:39:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:44.219 [2024-11-20 13:39:43.548474] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:19:44.219 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:44.219 Zero copy mechanism will not be used. 
00:19:44.219 [2024-11-20 13:39:43.548628] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78966 ] 00:19:44.478 [2024-11-20 13:39:43.735135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.478 [2024-11-20 13:39:43.860608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.736 [2024-11-20 13:39:44.086142] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:44.736 [2024-11-20 13:39:44.086216] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:44.995 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:44.996 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:19:44.996 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:44.996 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:44.996 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.996 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:44.996 BaseBdev1_malloc 00:19:44.996 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.996 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:44.996 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.996 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:44.996 [2024-11-20 13:39:44.459785] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:44.996 [2024-11-20 13:39:44.459867] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:44.996 [2024-11-20 13:39:44.459902] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:44.996 [2024-11-20 13:39:44.459919] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:44.996 [2024-11-20 13:39:44.462611] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:44.996 [2024-11-20 13:39:44.462661] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:44.996 BaseBdev1 00:19:44.996 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.996 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:44.996 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:44.996 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.996 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:45.255 BaseBdev2_malloc 00:19:45.255 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.255 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:45.255 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.255 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:45.255 [2024-11-20 13:39:44.514692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:45.255 [2024-11-20 13:39:44.514767] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:19:45.255 [2024-11-20 13:39:44.514806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:45.255 [2024-11-20 13:39:44.514824] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:45.255 [2024-11-20 13:39:44.517413] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:45.255 [2024-11-20 13:39:44.517459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:45.255 BaseBdev2 00:19:45.255 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.255 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:45.255 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:45.255 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.255 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:45.255 BaseBdev3_malloc 00:19:45.255 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.255 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:45.255 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.255 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:45.255 [2024-11-20 13:39:44.577598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:45.255 [2024-11-20 13:39:44.577662] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:45.255 [2024-11-20 13:39:44.577686] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:45.255 
[2024-11-20 13:39:44.577702] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:45.255 [2024-11-20 13:39:44.580288] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:45.255 [2024-11-20 13:39:44.580336] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:45.255 BaseBdev3 00:19:45.255 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.255 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:45.255 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:45.255 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.255 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:45.255 BaseBdev4_malloc 00:19:45.255 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.255 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:19:45.255 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.255 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:45.255 [2024-11-20 13:39:44.633262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:19:45.255 [2024-11-20 13:39:44.633331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:45.255 [2024-11-20 13:39:44.633354] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:45.255 [2024-11-20 13:39:44.633370] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:45.255 [2024-11-20 13:39:44.635955] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:45.255 [2024-11-20 13:39:44.636006] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:45.255 BaseBdev4 00:19:45.255 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.255 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:45.255 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.255 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:45.255 spare_malloc 00:19:45.255 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.255 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:45.255 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.255 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:45.255 spare_delay 00:19:45.255 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.255 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:45.255 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.255 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:45.255 [2024-11-20 13:39:44.700784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:45.255 [2024-11-20 13:39:44.700876] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:45.255 [2024-11-20 13:39:44.700899] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:19:45.255 [2024-11-20 13:39:44.700913] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:45.255 [2024-11-20 13:39:44.703452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:45.255 [2024-11-20 13:39:44.703502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:45.255 spare 00:19:45.255 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.255 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:19:45.255 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.255 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:45.255 [2024-11-20 13:39:44.712835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:45.255 [2024-11-20 13:39:44.715091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:45.255 [2024-11-20 13:39:44.715163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:45.255 [2024-11-20 13:39:44.715220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:45.255 [2024-11-20 13:39:44.715413] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:45.255 [2024-11-20 13:39:44.715436] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:45.255 [2024-11-20 13:39:44.715726] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:45.255 [2024-11-20 13:39:44.715959] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:45.255 [2024-11-20 13:39:44.715979] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:45.255 [2024-11-20 13:39:44.716151] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:45.256 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.256 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:45.256 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:45.256 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:45.256 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:45.256 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:45.256 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:45.256 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:45.256 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:45.256 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:45.256 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:45.256 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.256 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.256 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.256 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:45.256 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.515 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:45.515 "name": "raid_bdev1", 00:19:45.515 "uuid": "6832da1e-ecc2-46a8-bda4-fb92ecf1e510", 00:19:45.515 "strip_size_kb": 0, 00:19:45.515 "state": "online", 00:19:45.515 "raid_level": "raid1", 00:19:45.515 "superblock": true, 00:19:45.515 "num_base_bdevs": 4, 00:19:45.515 "num_base_bdevs_discovered": 4, 00:19:45.515 "num_base_bdevs_operational": 4, 00:19:45.515 "base_bdevs_list": [ 00:19:45.515 { 00:19:45.515 "name": "BaseBdev1", 00:19:45.515 "uuid": "91b76f73-864d-5b92-8c55-aec9aff67594", 00:19:45.515 "is_configured": true, 00:19:45.515 "data_offset": 2048, 00:19:45.515 "data_size": 63488 00:19:45.515 }, 00:19:45.515 { 00:19:45.515 "name": "BaseBdev2", 00:19:45.515 "uuid": "172b6f95-680d-5a4b-8912-3904c9258e79", 00:19:45.515 "is_configured": true, 00:19:45.515 "data_offset": 2048, 00:19:45.515 "data_size": 63488 00:19:45.515 }, 00:19:45.515 { 00:19:45.515 "name": "BaseBdev3", 00:19:45.515 "uuid": "c84d4388-6d5e-597e-bd6b-f9009998a1ac", 00:19:45.515 "is_configured": true, 00:19:45.515 "data_offset": 2048, 00:19:45.515 "data_size": 63488 00:19:45.515 }, 00:19:45.515 { 00:19:45.515 "name": "BaseBdev4", 00:19:45.515 "uuid": "a94b6492-fd42-5f05-a428-94faf6901bb8", 00:19:45.515 "is_configured": true, 00:19:45.515 "data_offset": 2048, 00:19:45.515 "data_size": 63488 00:19:45.515 } 00:19:45.515 ] 00:19:45.515 }' 00:19:45.515 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:45.515 13:39:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:45.773 13:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:45.773 13:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:45.773 13:39:45 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.773 13:39:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:45.773 [2024-11-20 13:39:45.152554] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:45.773 13:39:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.773 13:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:19:45.773 13:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:45.773 13:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.773 13:39:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.773 13:39:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:45.773 13:39:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.773 13:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:19:45.773 13:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:19:45.773 13:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:45.773 13:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:45.773 13:39:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.773 13:39:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:45.773 [2024-11-20 13:39:45.224081] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:45.774 13:39:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.774 13:39:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:45.774 13:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:45.774 13:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:45.774 13:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:45.774 13:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:45.774 13:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:45.774 13:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:45.774 13:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:45.774 13:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:45.774 13:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:45.774 13:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.774 13:39:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.774 13:39:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:45.774 13:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.774 13:39:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.032 13:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:46.032 "name": "raid_bdev1", 00:19:46.032 "uuid": "6832da1e-ecc2-46a8-bda4-fb92ecf1e510", 00:19:46.032 "strip_size_kb": 0, 00:19:46.032 "state": "online", 00:19:46.032 "raid_level": "raid1", 00:19:46.032 
"superblock": true, 00:19:46.032 "num_base_bdevs": 4, 00:19:46.032 "num_base_bdevs_discovered": 3, 00:19:46.032 "num_base_bdevs_operational": 3, 00:19:46.032 "base_bdevs_list": [ 00:19:46.032 { 00:19:46.032 "name": null, 00:19:46.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.032 "is_configured": false, 00:19:46.032 "data_offset": 0, 00:19:46.032 "data_size": 63488 00:19:46.032 }, 00:19:46.032 { 00:19:46.032 "name": "BaseBdev2", 00:19:46.032 "uuid": "172b6f95-680d-5a4b-8912-3904c9258e79", 00:19:46.032 "is_configured": true, 00:19:46.032 "data_offset": 2048, 00:19:46.032 "data_size": 63488 00:19:46.032 }, 00:19:46.032 { 00:19:46.032 "name": "BaseBdev3", 00:19:46.032 "uuid": "c84d4388-6d5e-597e-bd6b-f9009998a1ac", 00:19:46.032 "is_configured": true, 00:19:46.032 "data_offset": 2048, 00:19:46.032 "data_size": 63488 00:19:46.032 }, 00:19:46.032 { 00:19:46.032 "name": "BaseBdev4", 00:19:46.032 "uuid": "a94b6492-fd42-5f05-a428-94faf6901bb8", 00:19:46.032 "is_configured": true, 00:19:46.032 "data_offset": 2048, 00:19:46.032 "data_size": 63488 00:19:46.032 } 00:19:46.032 ] 00:19:46.032 }' 00:19:46.032 13:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:46.032 13:39:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:46.032 [2024-11-20 13:39:45.321079] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:46.032 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:46.032 Zero copy mechanism will not be used. 00:19:46.032 Running I/O for 60 seconds... 
00:19:46.290 13:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:46.290 13:39:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.290 13:39:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:46.290 [2024-11-20 13:39:45.597121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:46.290 13:39:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.290 13:39:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:46.290 [2024-11-20 13:39:45.664966] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:19:46.290 [2024-11-20 13:39:45.667367] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:46.566 [2024-11-20 13:39:45.791913] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:46.566 [2024-11-20 13:39:45.793344] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:46.566 [2024-11-20 13:39:46.021912] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:46.566 [2024-11-20 13:39:46.022719] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:47.131 117.00 IOPS, 351.00 MiB/s [2024-11-20T13:39:46.616Z] [2024-11-20 13:39:46.362562] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:47.131 [2024-11-20 13:39:46.588026] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:47.131 [2024-11-20 13:39:46.588377] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:47.390 13:39:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:47.390 13:39:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:47.391 13:39:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:47.391 13:39:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:47.391 13:39:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:47.391 13:39:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.391 13:39:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.391 13:39:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.391 13:39:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:47.391 13:39:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.391 13:39:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:47.391 "name": "raid_bdev1", 00:19:47.391 "uuid": "6832da1e-ecc2-46a8-bda4-fb92ecf1e510", 00:19:47.391 "strip_size_kb": 0, 00:19:47.391 "state": "online", 00:19:47.391 "raid_level": "raid1", 00:19:47.391 "superblock": true, 00:19:47.391 "num_base_bdevs": 4, 00:19:47.391 "num_base_bdevs_discovered": 4, 00:19:47.391 "num_base_bdevs_operational": 4, 00:19:47.391 "process": { 00:19:47.391 "type": "rebuild", 00:19:47.391 "target": "spare", 00:19:47.391 "progress": { 00:19:47.391 "blocks": 10240, 00:19:47.391 "percent": 16 00:19:47.391 } 00:19:47.391 }, 00:19:47.391 "base_bdevs_list": [ 00:19:47.391 { 00:19:47.391 "name": "spare", 
00:19:47.391 "uuid": "91011f87-369e-5f97-92f1-3c4e6c2eb5a2", 00:19:47.391 "is_configured": true, 00:19:47.391 "data_offset": 2048, 00:19:47.391 "data_size": 63488 00:19:47.391 }, 00:19:47.391 { 00:19:47.391 "name": "BaseBdev2", 00:19:47.391 "uuid": "172b6f95-680d-5a4b-8912-3904c9258e79", 00:19:47.391 "is_configured": true, 00:19:47.391 "data_offset": 2048, 00:19:47.391 "data_size": 63488 00:19:47.391 }, 00:19:47.391 { 00:19:47.391 "name": "BaseBdev3", 00:19:47.391 "uuid": "c84d4388-6d5e-597e-bd6b-f9009998a1ac", 00:19:47.391 "is_configured": true, 00:19:47.391 "data_offset": 2048, 00:19:47.391 "data_size": 63488 00:19:47.391 }, 00:19:47.391 { 00:19:47.391 "name": "BaseBdev4", 00:19:47.391 "uuid": "a94b6492-fd42-5f05-a428-94faf6901bb8", 00:19:47.391 "is_configured": true, 00:19:47.391 "data_offset": 2048, 00:19:47.391 "data_size": 63488 00:19:47.391 } 00:19:47.391 ] 00:19:47.391 }' 00:19:47.391 13:39:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:47.391 13:39:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:47.391 13:39:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:47.391 13:39:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:47.391 13:39:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:47.391 13:39:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.391 13:39:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:47.391 [2024-11-20 13:39:46.784256] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:47.688 [2024-11-20 13:39:46.901037] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:47.688 [2024-11-20 13:39:46.913240] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:47.688 [2024-11-20 13:39:46.913341] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:47.688 [2024-11-20 13:39:46.913358] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:47.688 [2024-11-20 13:39:46.939701] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:19:47.688 13:39:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.688 13:39:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:47.688 13:39:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:47.688 13:39:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:47.688 13:39:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:47.688 13:39:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:47.688 13:39:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:47.688 13:39:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:47.689 13:39:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:47.689 13:39:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:47.689 13:39:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:47.689 13:39:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.689 13:39:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.689 13:39:46 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:19:47.689 13:39:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.689 13:39:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.689 13:39:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:47.689 "name": "raid_bdev1", 00:19:47.689 "uuid": "6832da1e-ecc2-46a8-bda4-fb92ecf1e510", 00:19:47.689 "strip_size_kb": 0, 00:19:47.689 "state": "online", 00:19:47.689 "raid_level": "raid1", 00:19:47.689 "superblock": true, 00:19:47.689 "num_base_bdevs": 4, 00:19:47.689 "num_base_bdevs_discovered": 3, 00:19:47.689 "num_base_bdevs_operational": 3, 00:19:47.689 "base_bdevs_list": [ 00:19:47.689 { 00:19:47.689 "name": null, 00:19:47.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.689 "is_configured": false, 00:19:47.689 "data_offset": 0, 00:19:47.689 "data_size": 63488 00:19:47.689 }, 00:19:47.689 { 00:19:47.689 "name": "BaseBdev2", 00:19:47.689 "uuid": "172b6f95-680d-5a4b-8912-3904c9258e79", 00:19:47.689 "is_configured": true, 00:19:47.689 "data_offset": 2048, 00:19:47.689 "data_size": 63488 00:19:47.689 }, 00:19:47.689 { 00:19:47.689 "name": "BaseBdev3", 00:19:47.689 "uuid": "c84d4388-6d5e-597e-bd6b-f9009998a1ac", 00:19:47.689 "is_configured": true, 00:19:47.689 "data_offset": 2048, 00:19:47.689 "data_size": 63488 00:19:47.689 }, 00:19:47.689 { 00:19:47.689 "name": "BaseBdev4", 00:19:47.689 "uuid": "a94b6492-fd42-5f05-a428-94faf6901bb8", 00:19:47.689 "is_configured": true, 00:19:47.689 "data_offset": 2048, 00:19:47.689 "data_size": 63488 00:19:47.689 } 00:19:47.689 ] 00:19:47.689 }' 00:19:47.689 13:39:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:47.689 13:39:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:47.946 135.00 IOPS, 405.00 MiB/s [2024-11-20T13:39:47.431Z] 13:39:47 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:47.946 13:39:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:47.946 13:39:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:47.946 13:39:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:47.946 13:39:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:47.946 13:39:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.946 13:39:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.946 13:39:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.946 13:39:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:47.946 13:39:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.203 13:39:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:48.203 "name": "raid_bdev1", 00:19:48.203 "uuid": "6832da1e-ecc2-46a8-bda4-fb92ecf1e510", 00:19:48.203 "strip_size_kb": 0, 00:19:48.203 "state": "online", 00:19:48.203 "raid_level": "raid1", 00:19:48.203 "superblock": true, 00:19:48.203 "num_base_bdevs": 4, 00:19:48.203 "num_base_bdevs_discovered": 3, 00:19:48.203 "num_base_bdevs_operational": 3, 00:19:48.203 "base_bdevs_list": [ 00:19:48.203 { 00:19:48.203 "name": null, 00:19:48.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.203 "is_configured": false, 00:19:48.203 "data_offset": 0, 00:19:48.203 "data_size": 63488 00:19:48.203 }, 00:19:48.203 { 00:19:48.203 "name": "BaseBdev2", 00:19:48.203 "uuid": "172b6f95-680d-5a4b-8912-3904c9258e79", 00:19:48.203 "is_configured": true, 00:19:48.203 "data_offset": 
2048, 00:19:48.203 "data_size": 63488 00:19:48.203 }, 00:19:48.203 { 00:19:48.203 "name": "BaseBdev3", 00:19:48.204 "uuid": "c84d4388-6d5e-597e-bd6b-f9009998a1ac", 00:19:48.204 "is_configured": true, 00:19:48.204 "data_offset": 2048, 00:19:48.204 "data_size": 63488 00:19:48.204 }, 00:19:48.204 { 00:19:48.204 "name": "BaseBdev4", 00:19:48.204 "uuid": "a94b6492-fd42-5f05-a428-94faf6901bb8", 00:19:48.204 "is_configured": true, 00:19:48.204 "data_offset": 2048, 00:19:48.204 "data_size": 63488 00:19:48.204 } 00:19:48.204 ] 00:19:48.204 }' 00:19:48.204 13:39:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:48.204 13:39:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:48.204 13:39:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:48.204 13:39:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:48.204 13:39:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:48.204 13:39:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.204 13:39:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:48.204 [2024-11-20 13:39:47.515699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:48.204 13:39:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.204 13:39:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:48.204 [2024-11-20 13:39:47.585233] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:48.204 [2024-11-20 13:39:47.587545] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:48.461 [2024-11-20 13:39:47.697637] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:48.461 [2024-11-20 13:39:47.698028] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:48.461 [2024-11-20 13:39:47.809319] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:48.461 [2024-11-20 13:39:47.810038] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:49.026 [2024-11-20 13:39:48.297400] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:49.285 134.67 IOPS, 404.00 MiB/s [2024-11-20T13:39:48.770Z] 13:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:49.285 13:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:49.285 13:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:49.285 13:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:49.285 13:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:49.285 13:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.285 13:39:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.285 13:39:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:49.285 13:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.285 13:39:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.285 13:39:48 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:49.285 "name": "raid_bdev1", 00:19:49.285 "uuid": "6832da1e-ecc2-46a8-bda4-fb92ecf1e510", 00:19:49.285 "strip_size_kb": 0, 00:19:49.285 "state": "online", 00:19:49.285 "raid_level": "raid1", 00:19:49.285 "superblock": true, 00:19:49.285 "num_base_bdevs": 4, 00:19:49.285 "num_base_bdevs_discovered": 4, 00:19:49.285 "num_base_bdevs_operational": 4, 00:19:49.285 "process": { 00:19:49.285 "type": "rebuild", 00:19:49.285 "target": "spare", 00:19:49.285 "progress": { 00:19:49.285 "blocks": 12288, 00:19:49.285 "percent": 19 00:19:49.285 } 00:19:49.285 }, 00:19:49.285 "base_bdevs_list": [ 00:19:49.285 { 00:19:49.285 "name": "spare", 00:19:49.285 "uuid": "91011f87-369e-5f97-92f1-3c4e6c2eb5a2", 00:19:49.285 "is_configured": true, 00:19:49.285 "data_offset": 2048, 00:19:49.285 "data_size": 63488 00:19:49.285 }, 00:19:49.285 { 00:19:49.285 "name": "BaseBdev2", 00:19:49.285 "uuid": "172b6f95-680d-5a4b-8912-3904c9258e79", 00:19:49.285 "is_configured": true, 00:19:49.285 "data_offset": 2048, 00:19:49.285 "data_size": 63488 00:19:49.285 }, 00:19:49.285 { 00:19:49.285 "name": "BaseBdev3", 00:19:49.285 "uuid": "c84d4388-6d5e-597e-bd6b-f9009998a1ac", 00:19:49.285 "is_configured": true, 00:19:49.285 "data_offset": 2048, 00:19:49.285 "data_size": 63488 00:19:49.285 }, 00:19:49.285 { 00:19:49.285 "name": "BaseBdev4", 00:19:49.285 "uuid": "a94b6492-fd42-5f05-a428-94faf6901bb8", 00:19:49.285 "is_configured": true, 00:19:49.285 "data_offset": 2048, 00:19:49.285 "data_size": 63488 00:19:49.285 } 00:19:49.285 ] 00:19:49.286 }' 00:19:49.286 13:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:49.286 13:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:49.286 13:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:49.286 13:39:48 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:49.286 13:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:49.286 13:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:49.286 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:49.286 13:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:19:49.286 13:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:49.286 13:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:19:49.286 13:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:49.286 13:39:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.286 13:39:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:49.286 [2024-11-20 13:39:48.701510] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:49.286 [2024-11-20 13:39:48.753323] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:49.545 [2024-11-20 13:39:48.952241] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:19:49.545 [2024-11-20 13:39:48.952303] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:19:49.545 [2024-11-20 13:39:48.953845] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:49.545 [2024-11-20 13:39:48.954878] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:49.545 13:39:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:19:49.545 13:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:19:49.545 13:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:19:49.545 13:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:49.545 13:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:49.545 13:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:49.545 13:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:49.545 13:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:49.545 13:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.545 13:39:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.545 13:39:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:49.545 13:39:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.545 13:39:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.545 13:39:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:49.545 "name": "raid_bdev1", 00:19:49.546 "uuid": "6832da1e-ecc2-46a8-bda4-fb92ecf1e510", 00:19:49.546 "strip_size_kb": 0, 00:19:49.546 "state": "online", 00:19:49.546 "raid_level": "raid1", 00:19:49.546 "superblock": true, 00:19:49.546 "num_base_bdevs": 4, 00:19:49.546 "num_base_bdevs_discovered": 3, 00:19:49.546 "num_base_bdevs_operational": 3, 00:19:49.546 "process": { 00:19:49.546 "type": "rebuild", 00:19:49.546 "target": "spare", 00:19:49.546 "progress": { 00:19:49.546 "blocks": 16384, 
00:19:49.546 "percent": 25 00:19:49.546 } 00:19:49.546 }, 00:19:49.546 "base_bdevs_list": [ 00:19:49.546 { 00:19:49.546 "name": "spare", 00:19:49.546 "uuid": "91011f87-369e-5f97-92f1-3c4e6c2eb5a2", 00:19:49.546 "is_configured": true, 00:19:49.546 "data_offset": 2048, 00:19:49.546 "data_size": 63488 00:19:49.546 }, 00:19:49.546 { 00:19:49.546 "name": null, 00:19:49.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.546 "is_configured": false, 00:19:49.546 "data_offset": 0, 00:19:49.546 "data_size": 63488 00:19:49.546 }, 00:19:49.546 { 00:19:49.546 "name": "BaseBdev3", 00:19:49.546 "uuid": "c84d4388-6d5e-597e-bd6b-f9009998a1ac", 00:19:49.546 "is_configured": true, 00:19:49.546 "data_offset": 2048, 00:19:49.546 "data_size": 63488 00:19:49.546 }, 00:19:49.546 { 00:19:49.546 "name": "BaseBdev4", 00:19:49.546 "uuid": "a94b6492-fd42-5f05-a428-94faf6901bb8", 00:19:49.546 "is_configured": true, 00:19:49.546 "data_offset": 2048, 00:19:49.546 "data_size": 63488 00:19:49.546 } 00:19:49.546 ] 00:19:49.546 }' 00:19:49.546 13:39:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:49.804 13:39:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:49.804 13:39:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:49.804 13:39:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:49.804 13:39:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=497 00:19:49.804 13:39:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:49.804 13:39:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:49.804 13:39:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:49.804 13:39:49 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:49.804 13:39:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:49.804 13:39:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:49.804 13:39:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.804 13:39:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.804 13:39:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.804 13:39:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:49.804 13:39:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.804 13:39:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:49.804 "name": "raid_bdev1", 00:19:49.804 "uuid": "6832da1e-ecc2-46a8-bda4-fb92ecf1e510", 00:19:49.804 "strip_size_kb": 0, 00:19:49.804 "state": "online", 00:19:49.804 "raid_level": "raid1", 00:19:49.804 "superblock": true, 00:19:49.804 "num_base_bdevs": 4, 00:19:49.804 "num_base_bdevs_discovered": 3, 00:19:49.804 "num_base_bdevs_operational": 3, 00:19:49.804 "process": { 00:19:49.804 "type": "rebuild", 00:19:49.804 "target": "spare", 00:19:49.804 "progress": { 00:19:49.804 "blocks": 18432, 00:19:49.804 "percent": 29 00:19:49.804 } 00:19:49.804 }, 00:19:49.804 "base_bdevs_list": [ 00:19:49.804 { 00:19:49.804 "name": "spare", 00:19:49.804 "uuid": "91011f87-369e-5f97-92f1-3c4e6c2eb5a2", 00:19:49.804 "is_configured": true, 00:19:49.804 "data_offset": 2048, 00:19:49.804 "data_size": 63488 00:19:49.804 }, 00:19:49.804 { 00:19:49.805 "name": null, 00:19:49.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.805 "is_configured": false, 00:19:49.805 "data_offset": 0, 00:19:49.805 "data_size": 63488 
00:19:49.805 }, 00:19:49.805 { 00:19:49.805 "name": "BaseBdev3", 00:19:49.805 "uuid": "c84d4388-6d5e-597e-bd6b-f9009998a1ac", 00:19:49.805 "is_configured": true, 00:19:49.805 "data_offset": 2048, 00:19:49.805 "data_size": 63488 00:19:49.805 }, 00:19:49.805 { 00:19:49.805 "name": "BaseBdev4", 00:19:49.805 "uuid": "a94b6492-fd42-5f05-a428-94faf6901bb8", 00:19:49.805 "is_configured": true, 00:19:49.805 "data_offset": 2048, 00:19:49.805 "data_size": 63488 00:19:49.805 } 00:19:49.805 ] 00:19:49.805 }' 00:19:49.805 13:39:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:49.805 13:39:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:49.805 13:39:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:49.805 [2024-11-20 13:39:49.207691] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:19:49.805 13:39:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:49.805 13:39:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:50.630 113.00 IOPS, 339.00 MiB/s [2024-11-20T13:39:50.115Z] [2024-11-20 13:39:49.953818] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:19:50.630 [2024-11-20 13:39:50.085246] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:19:50.941 13:39:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:50.942 13:39:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:50.942 13:39:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:50.942 13:39:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:50.942 13:39:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:50.942 13:39:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:50.942 13:39:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.942 13:39:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.942 13:39:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:50.942 13:39:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.942 13:39:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.942 13:39:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:50.942 "name": "raid_bdev1", 00:19:50.942 "uuid": "6832da1e-ecc2-46a8-bda4-fb92ecf1e510", 00:19:50.942 "strip_size_kb": 0, 00:19:50.942 "state": "online", 00:19:50.942 "raid_level": "raid1", 00:19:50.942 "superblock": true, 00:19:50.942 "num_base_bdevs": 4, 00:19:50.942 "num_base_bdevs_discovered": 3, 00:19:50.942 "num_base_bdevs_operational": 3, 00:19:50.942 "process": { 00:19:50.942 "type": "rebuild", 00:19:50.942 "target": "spare", 00:19:50.942 "progress": { 00:19:50.942 "blocks": 36864, 00:19:50.942 "percent": 58 00:19:50.942 } 00:19:50.942 }, 00:19:50.942 "base_bdevs_list": [ 00:19:50.942 { 00:19:50.942 "name": "spare", 00:19:50.942 "uuid": "91011f87-369e-5f97-92f1-3c4e6c2eb5a2", 00:19:50.942 "is_configured": true, 00:19:50.942 "data_offset": 2048, 00:19:50.942 "data_size": 63488 00:19:50.942 }, 00:19:50.942 { 00:19:50.942 "name": null, 00:19:50.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.942 "is_configured": false, 00:19:50.942 "data_offset": 0, 00:19:50.942 "data_size": 63488 
00:19:50.942 }, 00:19:50.942 { 00:19:50.942 "name": "BaseBdev3", 00:19:50.942 "uuid": "c84d4388-6d5e-597e-bd6b-f9009998a1ac", 00:19:50.942 "is_configured": true, 00:19:50.942 "data_offset": 2048, 00:19:50.942 "data_size": 63488 00:19:50.942 }, 00:19:50.942 { 00:19:50.942 "name": "BaseBdev4", 00:19:50.942 "uuid": "a94b6492-fd42-5f05-a428-94faf6901bb8", 00:19:50.942 "is_configured": true, 00:19:50.942 "data_offset": 2048, 00:19:50.942 "data_size": 63488 00:19:50.942 } 00:19:50.942 ] 00:19:50.942 }' 00:19:50.942 13:39:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:50.942 103.40 IOPS, 310.20 MiB/s [2024-11-20T13:39:50.427Z] 13:39:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:50.942 13:39:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:50.942 13:39:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:50.942 13:39:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:51.238 [2024-11-20 13:39:50.670201] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:19:51.497 [2024-11-20 13:39:50.785838] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:19:51.755 [2024-11-20 13:39:51.119510] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:19:52.015 93.17 IOPS, 279.50 MiB/s [2024-11-20T13:39:51.500Z] [2024-11-20 13:39:51.342913] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:19:52.015 13:39:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:52.015 13:39:51 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:52.015 13:39:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:52.015 13:39:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:52.015 13:39:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:52.015 13:39:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:52.015 13:39:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.015 13:39:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.015 13:39:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.015 13:39:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:52.015 13:39:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.015 13:39:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:52.015 "name": "raid_bdev1", 00:19:52.015 "uuid": "6832da1e-ecc2-46a8-bda4-fb92ecf1e510", 00:19:52.015 "strip_size_kb": 0, 00:19:52.015 "state": "online", 00:19:52.015 "raid_level": "raid1", 00:19:52.015 "superblock": true, 00:19:52.015 "num_base_bdevs": 4, 00:19:52.015 "num_base_bdevs_discovered": 3, 00:19:52.015 "num_base_bdevs_operational": 3, 00:19:52.015 "process": { 00:19:52.015 "type": "rebuild", 00:19:52.015 "target": "spare", 00:19:52.015 "progress": { 00:19:52.015 "blocks": 53248, 00:19:52.015 "percent": 83 00:19:52.015 } 00:19:52.015 }, 00:19:52.015 "base_bdevs_list": [ 00:19:52.015 { 00:19:52.015 "name": "spare", 00:19:52.015 "uuid": "91011f87-369e-5f97-92f1-3c4e6c2eb5a2", 00:19:52.015 "is_configured": true, 00:19:52.015 "data_offset": 2048, 00:19:52.015 "data_size": 63488 
00:19:52.015 }, 00:19:52.015 { 00:19:52.015 "name": null, 00:19:52.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.015 "is_configured": false, 00:19:52.015 "data_offset": 0, 00:19:52.015 "data_size": 63488 00:19:52.015 }, 00:19:52.015 { 00:19:52.015 "name": "BaseBdev3", 00:19:52.015 "uuid": "c84d4388-6d5e-597e-bd6b-f9009998a1ac", 00:19:52.015 "is_configured": true, 00:19:52.015 "data_offset": 2048, 00:19:52.015 "data_size": 63488 00:19:52.015 }, 00:19:52.015 { 00:19:52.015 "name": "BaseBdev4", 00:19:52.015 "uuid": "a94b6492-fd42-5f05-a428-94faf6901bb8", 00:19:52.015 "is_configured": true, 00:19:52.015 "data_offset": 2048, 00:19:52.015 "data_size": 63488 00:19:52.015 } 00:19:52.015 ] 00:19:52.015 }' 00:19:52.015 13:39:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:52.015 13:39:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:52.015 13:39:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:52.273 13:39:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:52.273 13:39:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:52.531 [2024-11-20 13:39:52.002365] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:52.790 [2024-11-20 13:39:52.102310] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:52.790 [2024-11-20 13:39:52.104152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:53.049 85.14 IOPS, 255.43 MiB/s [2024-11-20T13:39:52.534Z] 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:53.049 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:53.049 13:39:52 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:53.049 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:53.308 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:53.308 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:53.308 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.308 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.308 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.309 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:53.309 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.309 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:53.309 "name": "raid_bdev1", 00:19:53.309 "uuid": "6832da1e-ecc2-46a8-bda4-fb92ecf1e510", 00:19:53.309 "strip_size_kb": 0, 00:19:53.309 "state": "online", 00:19:53.309 "raid_level": "raid1", 00:19:53.309 "superblock": true, 00:19:53.309 "num_base_bdevs": 4, 00:19:53.309 "num_base_bdevs_discovered": 3, 00:19:53.309 "num_base_bdevs_operational": 3, 00:19:53.309 "base_bdevs_list": [ 00:19:53.309 { 00:19:53.309 "name": "spare", 00:19:53.309 "uuid": "91011f87-369e-5f97-92f1-3c4e6c2eb5a2", 00:19:53.309 "is_configured": true, 00:19:53.309 "data_offset": 2048, 00:19:53.309 "data_size": 63488 00:19:53.309 }, 00:19:53.309 { 00:19:53.309 "name": null, 00:19:53.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.309 "is_configured": false, 00:19:53.309 "data_offset": 0, 00:19:53.309 "data_size": 63488 00:19:53.309 }, 00:19:53.309 { 00:19:53.309 "name": "BaseBdev3", 00:19:53.309 "uuid": 
"c84d4388-6d5e-597e-bd6b-f9009998a1ac", 00:19:53.309 "is_configured": true, 00:19:53.309 "data_offset": 2048, 00:19:53.309 "data_size": 63488 00:19:53.309 }, 00:19:53.309 { 00:19:53.309 "name": "BaseBdev4", 00:19:53.309 "uuid": "a94b6492-fd42-5f05-a428-94faf6901bb8", 00:19:53.309 "is_configured": true, 00:19:53.309 "data_offset": 2048, 00:19:53.309 "data_size": 63488 00:19:53.309 } 00:19:53.309 ] 00:19:53.309 }' 00:19:53.309 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:53.309 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:53.309 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:53.309 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:53.309 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:19:53.309 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:53.309 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:53.309 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:53.309 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:53.309 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:53.309 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.309 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.309 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.309 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:19:53.309 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.309 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:53.309 "name": "raid_bdev1", 00:19:53.309 "uuid": "6832da1e-ecc2-46a8-bda4-fb92ecf1e510", 00:19:53.309 "strip_size_kb": 0, 00:19:53.309 "state": "online", 00:19:53.309 "raid_level": "raid1", 00:19:53.309 "superblock": true, 00:19:53.309 "num_base_bdevs": 4, 00:19:53.309 "num_base_bdevs_discovered": 3, 00:19:53.309 "num_base_bdevs_operational": 3, 00:19:53.309 "base_bdevs_list": [ 00:19:53.309 { 00:19:53.309 "name": "spare", 00:19:53.309 "uuid": "91011f87-369e-5f97-92f1-3c4e6c2eb5a2", 00:19:53.309 "is_configured": true, 00:19:53.309 "data_offset": 2048, 00:19:53.309 "data_size": 63488 00:19:53.309 }, 00:19:53.309 { 00:19:53.309 "name": null, 00:19:53.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.309 "is_configured": false, 00:19:53.309 "data_offset": 0, 00:19:53.309 "data_size": 63488 00:19:53.309 }, 00:19:53.309 { 00:19:53.309 "name": "BaseBdev3", 00:19:53.309 "uuid": "c84d4388-6d5e-597e-bd6b-f9009998a1ac", 00:19:53.309 "is_configured": true, 00:19:53.309 "data_offset": 2048, 00:19:53.309 "data_size": 63488 00:19:53.309 }, 00:19:53.309 { 00:19:53.309 "name": "BaseBdev4", 00:19:53.309 "uuid": "a94b6492-fd42-5f05-a428-94faf6901bb8", 00:19:53.309 "is_configured": true, 00:19:53.309 "data_offset": 2048, 00:19:53.309 "data_size": 63488 00:19:53.309 } 00:19:53.309 ] 00:19:53.309 }' 00:19:53.309 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:53.309 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:53.309 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:53.309 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:53.309 
13:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:53.309 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:53.309 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:53.309 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:53.309 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:53.309 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:53.309 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:53.309 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:53.309 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:53.309 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:53.568 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.568 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.568 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.568 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:53.568 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.568 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:53.568 "name": "raid_bdev1", 00:19:53.568 "uuid": "6832da1e-ecc2-46a8-bda4-fb92ecf1e510", 00:19:53.568 "strip_size_kb": 0, 00:19:53.568 "state": "online", 00:19:53.568 "raid_level": "raid1", 00:19:53.568 
"superblock": true, 00:19:53.568 "num_base_bdevs": 4, 00:19:53.568 "num_base_bdevs_discovered": 3, 00:19:53.568 "num_base_bdevs_operational": 3, 00:19:53.568 "base_bdevs_list": [ 00:19:53.568 { 00:19:53.568 "name": "spare", 00:19:53.568 "uuid": "91011f87-369e-5f97-92f1-3c4e6c2eb5a2", 00:19:53.568 "is_configured": true, 00:19:53.568 "data_offset": 2048, 00:19:53.568 "data_size": 63488 00:19:53.568 }, 00:19:53.568 { 00:19:53.568 "name": null, 00:19:53.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.568 "is_configured": false, 00:19:53.568 "data_offset": 0, 00:19:53.568 "data_size": 63488 00:19:53.568 }, 00:19:53.568 { 00:19:53.568 "name": "BaseBdev3", 00:19:53.568 "uuid": "c84d4388-6d5e-597e-bd6b-f9009998a1ac", 00:19:53.568 "is_configured": true, 00:19:53.568 "data_offset": 2048, 00:19:53.568 "data_size": 63488 00:19:53.568 }, 00:19:53.568 { 00:19:53.568 "name": "BaseBdev4", 00:19:53.568 "uuid": "a94b6492-fd42-5f05-a428-94faf6901bb8", 00:19:53.568 "is_configured": true, 00:19:53.568 "data_offset": 2048, 00:19:53.568 "data_size": 63488 00:19:53.568 } 00:19:53.568 ] 00:19:53.568 }' 00:19:53.568 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:53.568 13:39:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:53.827 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:53.827 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.827 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:53.827 [2024-11-20 13:39:53.288299] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:53.827 [2024-11-20 13:39:53.288338] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:54.086 00:19:54.086 Latency(us) 00:19:54.086 [2024-11-20T13:39:53.571Z] Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:54.086 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:19:54.086 raid_bdev1 : 8.01 79.86 239.59 0.00 0.00 17202.22 322.42 112858.78 00:19:54.086 [2024-11-20T13:39:53.571Z] =================================================================================================================== 00:19:54.086 [2024-11-20T13:39:53.571Z] Total : 79.86 239.59 0.00 0.00 17202.22 322.42 112858.78 00:19:54.086 [2024-11-20 13:39:53.347519] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:54.086 [2024-11-20 13:39:53.347600] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:54.086 [2024-11-20 13:39:53.347700] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:54.086 [2024-11-20 13:39:53.347717] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:54.086 { 00:19:54.086 "results": [ 00:19:54.086 { 00:19:54.086 "job": "raid_bdev1", 00:19:54.086 "core_mask": "0x1", 00:19:54.086 "workload": "randrw", 00:19:54.086 "percentage": 50, 00:19:54.086 "status": "finished", 00:19:54.086 "queue_depth": 2, 00:19:54.086 "io_size": 3145728, 00:19:54.086 "runtime": 8.013627, 00:19:54.086 "iops": 79.86396172419805, 00:19:54.086 "mibps": 239.59188517259415, 00:19:54.086 "io_failed": 0, 00:19:54.086 "io_timeout": 0, 00:19:54.086 "avg_latency_us": 17202.224578313253, 00:19:54.086 "min_latency_us": 322.4160642570281, 00:19:54.086 "max_latency_us": 112858.78232931727 00:19:54.086 } 00:19:54.086 ], 00:19:54.086 "core_count": 1 00:19:54.086 } 00:19:54.086 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.086 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.086 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.086 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:54.086 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:19:54.086 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.086 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:54.086 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:54.086 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:19:54.086 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:19:54.086 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:54.086 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:19:54.086 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:54.086 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:54.086 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:54.086 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:19:54.086 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:54.086 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:54.086 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:19:54.345 /dev/nbd0 00:19:54.345 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:54.345 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:54.345 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:54.345 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:19:54.345 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:54.345 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:54.345 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:54.345 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:19:54.345 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:54.345 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:54.345 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:54.345 1+0 records in 00:19:54.345 1+0 records out 00:19:54.345 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000460898 s, 8.9 MB/s 00:19:54.345 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:54.345 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:19:54.345 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:54.345 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:54.345 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:19:54.345 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:54.345 13:39:53 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:54.345 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:19:54.345 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:19:54.345 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:19:54.345 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:19:54.345 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:19:54.345 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:19:54.345 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:54.345 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:19:54.345 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:54.345 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:19:54.345 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:54.345 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:19:54.345 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:54.345 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:54.345 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:19:54.604 /dev/nbd1 00:19:54.604 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:54.604 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd1 00:19:54.604 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:54.604 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:19:54.604 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:54.604 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:54.604 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:54.604 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:19:54.604 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:54.604 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:54.604 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:54.604 1+0 records in 00:19:54.604 1+0 records out 00:19:54.604 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411675 s, 9.9 MB/s 00:19:54.604 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:54.604 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:19:54.604 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:54.604 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:54.604 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:19:54.604 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:54.604 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:54.604 13:39:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:54.863 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:19:54.863 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:54.863 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:19:54.863 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:54.863 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:19:54.863 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:54.863 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:55.121 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:55.121 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:55.121 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:55.121 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:55.121 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:55.121 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:55.121 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:19:55.121 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:55.121 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:19:55.121 13:39:54 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:19:55.121 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:19:55.121 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:55.121 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:19:55.121 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:55.121 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:19:55.121 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:55.121 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:19:55.121 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:55.121 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:55.121 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:19:55.379 /dev/nbd1 00:19:55.379 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:55.379 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:55.379 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:55.379 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:19:55.379 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:55.379 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:55.379 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # 
grep -q -w nbd1 /proc/partitions 00:19:55.379 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:19:55.379 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:55.379 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:55.379 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:55.379 1+0 records in 00:19:55.379 1+0 records out 00:19:55.379 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434859 s, 9.4 MB/s 00:19:55.379 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:55.379 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:19:55.379 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:55.379 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:55.379 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:19:55.379 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:55.379 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:55.379 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:55.379 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:19:55.379 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:55.379 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:19:55.379 13:39:54 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:55.379 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:19:55.379 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:55.379 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:55.643 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:55.643 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:55.643 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:55.643 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:55.643 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:55.643 13:39:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:55.643 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:19:55.643 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:55.643 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:55.643 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:55.643 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:55.643 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:55.643 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:19:55.644 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:55.644 13:39:55 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:55.926 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:55.926 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:55.926 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:55.926 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:55.926 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:55.926 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:55.926 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:19:55.926 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:55.926 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:55.926 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:55.926 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.926 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:55.926 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.926 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:55.926 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.926 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:55.926 [2024-11-20 13:39:55.309579] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:55.926 
[2024-11-20 13:39:55.309640] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:55.926 [2024-11-20 13:39:55.309663] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:19:55.926 [2024-11-20 13:39:55.309678] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:55.926 [2024-11-20 13:39:55.312329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:55.926 [2024-11-20 13:39:55.312374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:55.926 [2024-11-20 13:39:55.312472] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:55.926 [2024-11-20 13:39:55.312531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:55.926 [2024-11-20 13:39:55.312660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:55.926 [2024-11-20 13:39:55.312754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:55.926 spare 00:19:55.926 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.926 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:55.926 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.926 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:56.185 [2024-11-20 13:39:55.412678] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:56.185 [2024-11-20 13:39:55.412922] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:56.185 [2024-11-20 13:39:55.413384] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:19:56.185 [2024-11-20 13:39:55.413629] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:56.185 [2024-11-20 13:39:55.413641] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:56.185 [2024-11-20 13:39:55.413867] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:56.185 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.185 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:56.185 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:56.185 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:56.185 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:56.185 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:56.185 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:56.185 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:56.185 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:56.185 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:56.185 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:56.185 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.185 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.185 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.185 13:39:55 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:56.185 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.185 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:56.185 "name": "raid_bdev1", 00:19:56.185 "uuid": "6832da1e-ecc2-46a8-bda4-fb92ecf1e510", 00:19:56.185 "strip_size_kb": 0, 00:19:56.185 "state": "online", 00:19:56.185 "raid_level": "raid1", 00:19:56.185 "superblock": true, 00:19:56.185 "num_base_bdevs": 4, 00:19:56.185 "num_base_bdevs_discovered": 3, 00:19:56.185 "num_base_bdevs_operational": 3, 00:19:56.185 "base_bdevs_list": [ 00:19:56.185 { 00:19:56.185 "name": "spare", 00:19:56.185 "uuid": "91011f87-369e-5f97-92f1-3c4e6c2eb5a2", 00:19:56.185 "is_configured": true, 00:19:56.185 "data_offset": 2048, 00:19:56.185 "data_size": 63488 00:19:56.185 }, 00:19:56.185 { 00:19:56.185 "name": null, 00:19:56.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.185 "is_configured": false, 00:19:56.185 "data_offset": 2048, 00:19:56.185 "data_size": 63488 00:19:56.185 }, 00:19:56.185 { 00:19:56.185 "name": "BaseBdev3", 00:19:56.185 "uuid": "c84d4388-6d5e-597e-bd6b-f9009998a1ac", 00:19:56.185 "is_configured": true, 00:19:56.185 "data_offset": 2048, 00:19:56.185 "data_size": 63488 00:19:56.185 }, 00:19:56.185 { 00:19:56.185 "name": "BaseBdev4", 00:19:56.185 "uuid": "a94b6492-fd42-5f05-a428-94faf6901bb8", 00:19:56.185 "is_configured": true, 00:19:56.185 "data_offset": 2048, 00:19:56.185 "data_size": 63488 00:19:56.185 } 00:19:56.185 ] 00:19:56.185 }' 00:19:56.185 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:56.185 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:56.443 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:56.443 13:39:55 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:56.443 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:56.443 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:56.443 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:56.443 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.443 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.443 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.443 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:56.443 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.444 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:56.444 "name": "raid_bdev1", 00:19:56.444 "uuid": "6832da1e-ecc2-46a8-bda4-fb92ecf1e510", 00:19:56.444 "strip_size_kb": 0, 00:19:56.444 "state": "online", 00:19:56.444 "raid_level": "raid1", 00:19:56.444 "superblock": true, 00:19:56.444 "num_base_bdevs": 4, 00:19:56.444 "num_base_bdevs_discovered": 3, 00:19:56.444 "num_base_bdevs_operational": 3, 00:19:56.444 "base_bdevs_list": [ 00:19:56.444 { 00:19:56.444 "name": "spare", 00:19:56.444 "uuid": "91011f87-369e-5f97-92f1-3c4e6c2eb5a2", 00:19:56.444 "is_configured": true, 00:19:56.444 "data_offset": 2048, 00:19:56.444 "data_size": 63488 00:19:56.444 }, 00:19:56.444 { 00:19:56.444 "name": null, 00:19:56.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.444 "is_configured": false, 00:19:56.444 "data_offset": 2048, 00:19:56.444 "data_size": 63488 00:19:56.444 }, 00:19:56.444 { 00:19:56.444 "name": "BaseBdev3", 00:19:56.444 "uuid": "c84d4388-6d5e-597e-bd6b-f9009998a1ac", 
00:19:56.444 "is_configured": true, 00:19:56.444 "data_offset": 2048, 00:19:56.444 "data_size": 63488 00:19:56.444 }, 00:19:56.444 { 00:19:56.444 "name": "BaseBdev4", 00:19:56.444 "uuid": "a94b6492-fd42-5f05-a428-94faf6901bb8", 00:19:56.444 "is_configured": true, 00:19:56.444 "data_offset": 2048, 00:19:56.444 "data_size": 63488 00:19:56.444 } 00:19:56.444 ] 00:19:56.444 }' 00:19:56.444 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:56.701 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:56.701 13:39:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:56.701 13:39:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:56.701 13:39:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.701 13:39:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.701 13:39:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:56.701 13:39:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:56.701 13:39:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.701 13:39:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:56.701 13:39:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:56.701 13:39:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.701 13:39:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:56.701 [2024-11-20 13:39:56.061015] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:56.701 13:39:56 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.701 13:39:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:56.701 13:39:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:56.702 13:39:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:56.702 13:39:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:56.702 13:39:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:56.702 13:39:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:56.702 13:39:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:56.702 13:39:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:56.702 13:39:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:56.702 13:39:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:56.702 13:39:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.702 13:39:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.702 13:39:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:56.702 13:39:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.702 13:39:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.702 13:39:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:56.702 "name": "raid_bdev1", 00:19:56.702 "uuid": "6832da1e-ecc2-46a8-bda4-fb92ecf1e510", 00:19:56.702 "strip_size_kb": 0, 00:19:56.702 "state": 
"online", 00:19:56.702 "raid_level": "raid1", 00:19:56.702 "superblock": true, 00:19:56.702 "num_base_bdevs": 4, 00:19:56.702 "num_base_bdevs_discovered": 2, 00:19:56.702 "num_base_bdevs_operational": 2, 00:19:56.702 "base_bdevs_list": [ 00:19:56.702 { 00:19:56.702 "name": null, 00:19:56.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.702 "is_configured": false, 00:19:56.702 "data_offset": 0, 00:19:56.702 "data_size": 63488 00:19:56.702 }, 00:19:56.702 { 00:19:56.702 "name": null, 00:19:56.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.702 "is_configured": false, 00:19:56.702 "data_offset": 2048, 00:19:56.702 "data_size": 63488 00:19:56.702 }, 00:19:56.702 { 00:19:56.702 "name": "BaseBdev3", 00:19:56.702 "uuid": "c84d4388-6d5e-597e-bd6b-f9009998a1ac", 00:19:56.702 "is_configured": true, 00:19:56.702 "data_offset": 2048, 00:19:56.702 "data_size": 63488 00:19:56.702 }, 00:19:56.702 { 00:19:56.702 "name": "BaseBdev4", 00:19:56.702 "uuid": "a94b6492-fd42-5f05-a428-94faf6901bb8", 00:19:56.702 "is_configured": true, 00:19:56.702 "data_offset": 2048, 00:19:56.702 "data_size": 63488 00:19:56.702 } 00:19:56.702 ] 00:19:56.702 }' 00:19:56.702 13:39:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:56.702 13:39:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:57.269 13:39:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:57.269 13:39:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.269 13:39:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:57.269 [2024-11-20 13:39:56.488494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:57.269 [2024-11-20 13:39:56.488822] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev 
raid_bdev1 (6) 00:19:57.269 [2024-11-20 13:39:56.488962] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:57.269 [2024-11-20 13:39:56.489015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:57.269 [2024-11-20 13:39:56.503810] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:19:57.269 13:39:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.269 13:39:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:57.269 [2024-11-20 13:39:56.505910] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:58.205 13:39:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:58.205 13:39:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:58.205 13:39:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:58.205 13:39:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:58.205 13:39:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:58.205 13:39:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.205 13:39:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.205 13:39:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.205 13:39:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:58.205 13:39:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.205 13:39:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:58.205 
"name": "raid_bdev1", 00:19:58.205 "uuid": "6832da1e-ecc2-46a8-bda4-fb92ecf1e510", 00:19:58.205 "strip_size_kb": 0, 00:19:58.205 "state": "online", 00:19:58.205 "raid_level": "raid1", 00:19:58.205 "superblock": true, 00:19:58.205 "num_base_bdevs": 4, 00:19:58.205 "num_base_bdevs_discovered": 3, 00:19:58.205 "num_base_bdevs_operational": 3, 00:19:58.205 "process": { 00:19:58.205 "type": "rebuild", 00:19:58.205 "target": "spare", 00:19:58.205 "progress": { 00:19:58.205 "blocks": 20480, 00:19:58.205 "percent": 32 00:19:58.205 } 00:19:58.205 }, 00:19:58.205 "base_bdevs_list": [ 00:19:58.205 { 00:19:58.205 "name": "spare", 00:19:58.205 "uuid": "91011f87-369e-5f97-92f1-3c4e6c2eb5a2", 00:19:58.205 "is_configured": true, 00:19:58.205 "data_offset": 2048, 00:19:58.205 "data_size": 63488 00:19:58.205 }, 00:19:58.205 { 00:19:58.205 "name": null, 00:19:58.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.205 "is_configured": false, 00:19:58.205 "data_offset": 2048, 00:19:58.205 "data_size": 63488 00:19:58.205 }, 00:19:58.205 { 00:19:58.205 "name": "BaseBdev3", 00:19:58.205 "uuid": "c84d4388-6d5e-597e-bd6b-f9009998a1ac", 00:19:58.205 "is_configured": true, 00:19:58.205 "data_offset": 2048, 00:19:58.205 "data_size": 63488 00:19:58.205 }, 00:19:58.205 { 00:19:58.205 "name": "BaseBdev4", 00:19:58.205 "uuid": "a94b6492-fd42-5f05-a428-94faf6901bb8", 00:19:58.205 "is_configured": true, 00:19:58.205 "data_offset": 2048, 00:19:58.205 "data_size": 63488 00:19:58.205 } 00:19:58.205 ] 00:19:58.205 }' 00:19:58.205 13:39:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:58.205 13:39:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:58.205 13:39:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:58.205 13:39:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:58.205 
13:39:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:58.205 13:39:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.205 13:39:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:58.205 [2024-11-20 13:39:57.670543] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:58.465 [2024-11-20 13:39:57.711482] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:58.465 [2024-11-20 13:39:57.711765] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:58.465 [2024-11-20 13:39:57.711792] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:58.465 [2024-11-20 13:39:57.711807] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:58.465 13:39:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.465 13:39:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:58.465 13:39:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:58.465 13:39:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:58.465 13:39:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:58.465 13:39:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:58.465 13:39:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:58.465 13:39:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:58.465 13:39:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:58.465 13:39:57 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:58.465 13:39:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:58.465 13:39:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.465 13:39:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.465 13:39:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.465 13:39:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:58.465 13:39:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.465 13:39:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:58.465 "name": "raid_bdev1", 00:19:58.465 "uuid": "6832da1e-ecc2-46a8-bda4-fb92ecf1e510", 00:19:58.465 "strip_size_kb": 0, 00:19:58.465 "state": "online", 00:19:58.465 "raid_level": "raid1", 00:19:58.465 "superblock": true, 00:19:58.465 "num_base_bdevs": 4, 00:19:58.465 "num_base_bdevs_discovered": 2, 00:19:58.465 "num_base_bdevs_operational": 2, 00:19:58.465 "base_bdevs_list": [ 00:19:58.465 { 00:19:58.465 "name": null, 00:19:58.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.465 "is_configured": false, 00:19:58.465 "data_offset": 0, 00:19:58.465 "data_size": 63488 00:19:58.465 }, 00:19:58.465 { 00:19:58.465 "name": null, 00:19:58.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.465 "is_configured": false, 00:19:58.465 "data_offset": 2048, 00:19:58.465 "data_size": 63488 00:19:58.465 }, 00:19:58.465 { 00:19:58.465 "name": "BaseBdev3", 00:19:58.465 "uuid": "c84d4388-6d5e-597e-bd6b-f9009998a1ac", 00:19:58.465 "is_configured": true, 00:19:58.465 "data_offset": 2048, 00:19:58.465 "data_size": 63488 00:19:58.465 }, 00:19:58.465 { 00:19:58.465 "name": "BaseBdev4", 00:19:58.465 "uuid": 
"a94b6492-fd42-5f05-a428-94faf6901bb8", 00:19:58.465 "is_configured": true, 00:19:58.465 "data_offset": 2048, 00:19:58.465 "data_size": 63488 00:19:58.465 } 00:19:58.465 ] 00:19:58.465 }' 00:19:58.465 13:39:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:58.465 13:39:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:58.725 13:39:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:58.725 13:39:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.725 13:39:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:58.725 [2024-11-20 13:39:58.166885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:58.725 [2024-11-20 13:39:58.166964] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:58.725 [2024-11-20 13:39:58.166996] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:19:58.725 [2024-11-20 13:39:58.167012] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:58.725 [2024-11-20 13:39:58.167604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:58.725 [2024-11-20 13:39:58.167635] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:58.725 [2024-11-20 13:39:58.167741] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:58.725 [2024-11-20 13:39:58.167765] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:19:58.725 [2024-11-20 13:39:58.167782] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:58.725 [2024-11-20 13:39:58.167813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:58.725 [2024-11-20 13:39:58.184194] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:19:58.725 spare 00:19:58.725 13:39:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.725 13:39:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:58.725 [2024-11-20 13:39:58.186571] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:00.104 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:00.104 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:00.104 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:00.104 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:00.104 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:00.104 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.104 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.104 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.104 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:00.104 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.104 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:00.104 "name": "raid_bdev1", 00:20:00.104 "uuid": "6832da1e-ecc2-46a8-bda4-fb92ecf1e510", 00:20:00.104 "strip_size_kb": 0, 00:20:00.104 
"state": "online", 00:20:00.104 "raid_level": "raid1", 00:20:00.104 "superblock": true, 00:20:00.104 "num_base_bdevs": 4, 00:20:00.104 "num_base_bdevs_discovered": 3, 00:20:00.104 "num_base_bdevs_operational": 3, 00:20:00.104 "process": { 00:20:00.104 "type": "rebuild", 00:20:00.104 "target": "spare", 00:20:00.104 "progress": { 00:20:00.104 "blocks": 20480, 00:20:00.104 "percent": 32 00:20:00.104 } 00:20:00.104 }, 00:20:00.104 "base_bdevs_list": [ 00:20:00.104 { 00:20:00.104 "name": "spare", 00:20:00.104 "uuid": "91011f87-369e-5f97-92f1-3c4e6c2eb5a2", 00:20:00.104 "is_configured": true, 00:20:00.104 "data_offset": 2048, 00:20:00.104 "data_size": 63488 00:20:00.104 }, 00:20:00.104 { 00:20:00.104 "name": null, 00:20:00.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.104 "is_configured": false, 00:20:00.104 "data_offset": 2048, 00:20:00.104 "data_size": 63488 00:20:00.104 }, 00:20:00.104 { 00:20:00.104 "name": "BaseBdev3", 00:20:00.104 "uuid": "c84d4388-6d5e-597e-bd6b-f9009998a1ac", 00:20:00.104 "is_configured": true, 00:20:00.104 "data_offset": 2048, 00:20:00.104 "data_size": 63488 00:20:00.104 }, 00:20:00.104 { 00:20:00.104 "name": "BaseBdev4", 00:20:00.104 "uuid": "a94b6492-fd42-5f05-a428-94faf6901bb8", 00:20:00.104 "is_configured": true, 00:20:00.104 "data_offset": 2048, 00:20:00.104 "data_size": 63488 00:20:00.104 } 00:20:00.104 ] 00:20:00.104 }' 00:20:00.104 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:00.104 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:00.104 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:00.104 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:00.104 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:00.104 13:39:59 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.104 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:00.104 [2024-11-20 13:39:59.342463] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:00.104 [2024-11-20 13:39:59.392320] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:00.104 [2024-11-20 13:39:59.392399] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:00.104 [2024-11-20 13:39:59.392423] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:00.104 [2024-11-20 13:39:59.392432] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:00.104 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.104 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:00.104 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:00.104 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:00.104 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:00.104 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:00.104 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:00.104 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:00.104 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:00.104 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:00.104 13:39:59 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:00.104 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.104 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.104 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.104 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:00.104 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.105 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:00.105 "name": "raid_bdev1", 00:20:00.105 "uuid": "6832da1e-ecc2-46a8-bda4-fb92ecf1e510", 00:20:00.105 "strip_size_kb": 0, 00:20:00.105 "state": "online", 00:20:00.105 "raid_level": "raid1", 00:20:00.105 "superblock": true, 00:20:00.105 "num_base_bdevs": 4, 00:20:00.105 "num_base_bdevs_discovered": 2, 00:20:00.105 "num_base_bdevs_operational": 2, 00:20:00.105 "base_bdevs_list": [ 00:20:00.105 { 00:20:00.105 "name": null, 00:20:00.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.105 "is_configured": false, 00:20:00.105 "data_offset": 0, 00:20:00.105 "data_size": 63488 00:20:00.105 }, 00:20:00.105 { 00:20:00.105 "name": null, 00:20:00.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.105 "is_configured": false, 00:20:00.105 "data_offset": 2048, 00:20:00.105 "data_size": 63488 00:20:00.105 }, 00:20:00.105 { 00:20:00.105 "name": "BaseBdev3", 00:20:00.105 "uuid": "c84d4388-6d5e-597e-bd6b-f9009998a1ac", 00:20:00.105 "is_configured": true, 00:20:00.105 "data_offset": 2048, 00:20:00.105 "data_size": 63488 00:20:00.105 }, 00:20:00.105 { 00:20:00.105 "name": "BaseBdev4", 00:20:00.105 "uuid": "a94b6492-fd42-5f05-a428-94faf6901bb8", 00:20:00.105 "is_configured": true, 00:20:00.105 "data_offset": 2048, 00:20:00.105 
"data_size": 63488 00:20:00.105 } 00:20:00.105 ] 00:20:00.105 }' 00:20:00.105 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:00.105 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:00.712 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:00.712 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:00.712 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:00.713 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:00.713 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:00.713 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.713 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.713 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:00.713 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.713 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.713 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:00.713 "name": "raid_bdev1", 00:20:00.713 "uuid": "6832da1e-ecc2-46a8-bda4-fb92ecf1e510", 00:20:00.713 "strip_size_kb": 0, 00:20:00.713 "state": "online", 00:20:00.713 "raid_level": "raid1", 00:20:00.713 "superblock": true, 00:20:00.713 "num_base_bdevs": 4, 00:20:00.713 "num_base_bdevs_discovered": 2, 00:20:00.713 "num_base_bdevs_operational": 2, 00:20:00.713 "base_bdevs_list": [ 00:20:00.713 { 00:20:00.713 "name": null, 00:20:00.713 "uuid": "00000000-0000-0000-0000-000000000000", 
00:20:00.713 "is_configured": false, 00:20:00.713 "data_offset": 0, 00:20:00.713 "data_size": 63488 00:20:00.713 }, 00:20:00.713 { 00:20:00.713 "name": null, 00:20:00.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.713 "is_configured": false, 00:20:00.713 "data_offset": 2048, 00:20:00.713 "data_size": 63488 00:20:00.713 }, 00:20:00.713 { 00:20:00.713 "name": "BaseBdev3", 00:20:00.713 "uuid": "c84d4388-6d5e-597e-bd6b-f9009998a1ac", 00:20:00.713 "is_configured": true, 00:20:00.713 "data_offset": 2048, 00:20:00.713 "data_size": 63488 00:20:00.713 }, 00:20:00.713 { 00:20:00.713 "name": "BaseBdev4", 00:20:00.713 "uuid": "a94b6492-fd42-5f05-a428-94faf6901bb8", 00:20:00.713 "is_configured": true, 00:20:00.713 "data_offset": 2048, 00:20:00.713 "data_size": 63488 00:20:00.713 } 00:20:00.713 ] 00:20:00.713 }' 00:20:00.713 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:00.713 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:00.713 13:39:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:00.713 13:40:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:00.713 13:40:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:00.713 13:40:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.713 13:40:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:00.713 13:40:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.714 13:40:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:00.714 13:40:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.714 13:40:00 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:00.714 [2024-11-20 13:40:00.052048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:00.714 [2024-11-20 13:40:00.052131] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:00.714 [2024-11-20 13:40:00.052159] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:20:00.714 [2024-11-20 13:40:00.052171] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:00.714 [2024-11-20 13:40:00.052713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:00.714 [2024-11-20 13:40:00.052734] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:00.714 [2024-11-20 13:40:00.052834] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:00.714 [2024-11-20 13:40:00.052848] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:20:00.714 [2024-11-20 13:40:00.052861] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:00.714 [2024-11-20 13:40:00.052875] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:00.714 BaseBdev1 00:20:00.714 13:40:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.714 13:40:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:01.678 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:01.678 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:01.678 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:20:01.678 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:01.678 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:01.678 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:01.678 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:01.678 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:01.678 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:01.678 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:01.678 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.678 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.678 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.678 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:01.678 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.678 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:01.678 "name": "raid_bdev1", 00:20:01.678 "uuid": "6832da1e-ecc2-46a8-bda4-fb92ecf1e510", 00:20:01.679 "strip_size_kb": 0, 00:20:01.679 "state": "online", 00:20:01.679 "raid_level": "raid1", 00:20:01.679 "superblock": true, 00:20:01.679 "num_base_bdevs": 4, 00:20:01.679 "num_base_bdevs_discovered": 2, 00:20:01.679 "num_base_bdevs_operational": 2, 00:20:01.679 "base_bdevs_list": [ 00:20:01.679 { 00:20:01.679 "name": null, 00:20:01.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.679 "is_configured": false, 00:20:01.679 
"data_offset": 0, 00:20:01.679 "data_size": 63488 00:20:01.679 }, 00:20:01.679 { 00:20:01.679 "name": null, 00:20:01.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.679 "is_configured": false, 00:20:01.679 "data_offset": 2048, 00:20:01.679 "data_size": 63488 00:20:01.679 }, 00:20:01.679 { 00:20:01.679 "name": "BaseBdev3", 00:20:01.679 "uuid": "c84d4388-6d5e-597e-bd6b-f9009998a1ac", 00:20:01.679 "is_configured": true, 00:20:01.679 "data_offset": 2048, 00:20:01.679 "data_size": 63488 00:20:01.679 }, 00:20:01.679 { 00:20:01.679 "name": "BaseBdev4", 00:20:01.679 "uuid": "a94b6492-fd42-5f05-a428-94faf6901bb8", 00:20:01.679 "is_configured": true, 00:20:01.679 "data_offset": 2048, 00:20:01.679 "data_size": 63488 00:20:01.679 } 00:20:01.679 ] 00:20:01.679 }' 00:20:01.679 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:01.679 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:02.247 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:02.247 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:02.247 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:02.247 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:02.247 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:02.247 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.247 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.247 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:02.247 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:20:02.247 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.247 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:02.247 "name": "raid_bdev1", 00:20:02.247 "uuid": "6832da1e-ecc2-46a8-bda4-fb92ecf1e510", 00:20:02.247 "strip_size_kb": 0, 00:20:02.247 "state": "online", 00:20:02.247 "raid_level": "raid1", 00:20:02.247 "superblock": true, 00:20:02.247 "num_base_bdevs": 4, 00:20:02.247 "num_base_bdevs_discovered": 2, 00:20:02.247 "num_base_bdevs_operational": 2, 00:20:02.247 "base_bdevs_list": [ 00:20:02.247 { 00:20:02.247 "name": null, 00:20:02.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.247 "is_configured": false, 00:20:02.247 "data_offset": 0, 00:20:02.247 "data_size": 63488 00:20:02.247 }, 00:20:02.247 { 00:20:02.247 "name": null, 00:20:02.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.247 "is_configured": false, 00:20:02.247 "data_offset": 2048, 00:20:02.247 "data_size": 63488 00:20:02.247 }, 00:20:02.247 { 00:20:02.247 "name": "BaseBdev3", 00:20:02.247 "uuid": "c84d4388-6d5e-597e-bd6b-f9009998a1ac", 00:20:02.247 "is_configured": true, 00:20:02.247 "data_offset": 2048, 00:20:02.247 "data_size": 63488 00:20:02.247 }, 00:20:02.247 { 00:20:02.247 "name": "BaseBdev4", 00:20:02.247 "uuid": "a94b6492-fd42-5f05-a428-94faf6901bb8", 00:20:02.247 "is_configured": true, 00:20:02.247 "data_offset": 2048, 00:20:02.247 "data_size": 63488 00:20:02.247 } 00:20:02.247 ] 00:20:02.247 }' 00:20:02.247 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:02.247 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:02.247 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:02.247 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 
00:20:02.247 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:02.247 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:20:02.247 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:02.247 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:02.247 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:02.247 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:02.247 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:02.247 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:02.248 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.248 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:02.248 [2024-11-20 13:40:01.638455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:02.248 [2024-11-20 13:40:01.638804] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:20:02.248 [2024-11-20 13:40:01.638836] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:02.248 request: 00:20:02.248 { 00:20:02.248 "base_bdev": "BaseBdev1", 00:20:02.248 "raid_bdev": "raid_bdev1", 00:20:02.248 "method": "bdev_raid_add_base_bdev", 00:20:02.248 "req_id": 1 00:20:02.248 } 00:20:02.248 Got JSON-RPC error response 00:20:02.248 response: 00:20:02.248 { 00:20:02.248 "code": -22, 
00:20:02.248 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:02.248 } 00:20:02.248 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:02.248 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:20:02.248 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:02.248 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:02.248 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:02.248 13:40:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:03.184 13:40:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:03.184 13:40:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:03.184 13:40:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:03.184 13:40:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:03.184 13:40:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:03.184 13:40:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:03.184 13:40:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:03.184 13:40:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:03.184 13:40:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:03.184 13:40:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:03.184 13:40:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.184 13:40:02 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:03.184 13:40:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.184 13:40:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:03.444 13:40:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.444 13:40:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:03.444 "name": "raid_bdev1", 00:20:03.444 "uuid": "6832da1e-ecc2-46a8-bda4-fb92ecf1e510", 00:20:03.444 "strip_size_kb": 0, 00:20:03.444 "state": "online", 00:20:03.444 "raid_level": "raid1", 00:20:03.444 "superblock": true, 00:20:03.444 "num_base_bdevs": 4, 00:20:03.444 "num_base_bdevs_discovered": 2, 00:20:03.444 "num_base_bdevs_operational": 2, 00:20:03.444 "base_bdevs_list": [ 00:20:03.444 { 00:20:03.444 "name": null, 00:20:03.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:03.444 "is_configured": false, 00:20:03.444 "data_offset": 0, 00:20:03.444 "data_size": 63488 00:20:03.444 }, 00:20:03.444 { 00:20:03.444 "name": null, 00:20:03.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:03.444 "is_configured": false, 00:20:03.444 "data_offset": 2048, 00:20:03.444 "data_size": 63488 00:20:03.444 }, 00:20:03.444 { 00:20:03.444 "name": "BaseBdev3", 00:20:03.444 "uuid": "c84d4388-6d5e-597e-bd6b-f9009998a1ac", 00:20:03.444 "is_configured": true, 00:20:03.444 "data_offset": 2048, 00:20:03.444 "data_size": 63488 00:20:03.444 }, 00:20:03.444 { 00:20:03.444 "name": "BaseBdev4", 00:20:03.444 "uuid": "a94b6492-fd42-5f05-a428-94faf6901bb8", 00:20:03.444 "is_configured": true, 00:20:03.444 "data_offset": 2048, 00:20:03.444 "data_size": 63488 00:20:03.444 } 00:20:03.444 ] 00:20:03.444 }' 00:20:03.444 13:40:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:03.444 13:40:02 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:03.703 13:40:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:03.703 13:40:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:03.703 13:40:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:03.703 13:40:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:03.703 13:40:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:03.703 13:40:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.703 13:40:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:03.703 13:40:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.703 13:40:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:03.703 13:40:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.703 13:40:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:03.703 "name": "raid_bdev1", 00:20:03.703 "uuid": "6832da1e-ecc2-46a8-bda4-fb92ecf1e510", 00:20:03.703 "strip_size_kb": 0, 00:20:03.703 "state": "online", 00:20:03.703 "raid_level": "raid1", 00:20:03.703 "superblock": true, 00:20:03.703 "num_base_bdevs": 4, 00:20:03.703 "num_base_bdevs_discovered": 2, 00:20:03.703 "num_base_bdevs_operational": 2, 00:20:03.703 "base_bdevs_list": [ 00:20:03.703 { 00:20:03.703 "name": null, 00:20:03.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:03.703 "is_configured": false, 00:20:03.703 "data_offset": 0, 00:20:03.703 "data_size": 63488 00:20:03.703 }, 00:20:03.703 { 00:20:03.703 "name": null, 00:20:03.703 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:03.703 "is_configured": false, 00:20:03.703 "data_offset": 2048, 00:20:03.703 "data_size": 63488 00:20:03.703 }, 00:20:03.703 { 00:20:03.703 "name": "BaseBdev3", 00:20:03.703 "uuid": "c84d4388-6d5e-597e-bd6b-f9009998a1ac", 00:20:03.703 "is_configured": true, 00:20:03.703 "data_offset": 2048, 00:20:03.703 "data_size": 63488 00:20:03.703 }, 00:20:03.703 { 00:20:03.703 "name": "BaseBdev4", 00:20:03.703 "uuid": "a94b6492-fd42-5f05-a428-94faf6901bb8", 00:20:03.703 "is_configured": true, 00:20:03.703 "data_offset": 2048, 00:20:03.703 "data_size": 63488 00:20:03.703 } 00:20:03.703 ] 00:20:03.703 }' 00:20:03.703 13:40:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:03.703 13:40:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:03.703 13:40:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:03.961 13:40:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:03.961 13:40:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 78966 00:20:03.961 13:40:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 78966 ']' 00:20:03.961 13:40:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 78966 00:20:03.961 13:40:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:20:03.961 13:40:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:03.961 13:40:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78966 00:20:03.961 killing process with pid 78966 00:20:03.961 Received shutdown signal, test time was about 17.962923 seconds 00:20:03.961 00:20:03.961 Latency(us) 00:20:03.961 [2024-11-20T13:40:03.446Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:20:03.961 [2024-11-20T13:40:03.446Z] =================================================================================================================== 00:20:03.961 [2024-11-20T13:40:03.446Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:03.961 13:40:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:03.961 13:40:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:03.961 13:40:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78966' 00:20:03.961 13:40:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 78966 00:20:03.961 [2024-11-20 13:40:03.257314] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:03.961 13:40:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 78966 00:20:03.961 [2024-11-20 13:40:03.257499] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:03.961 [2024-11-20 13:40:03.257565] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:03.961 [2024-11-20 13:40:03.257584] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:04.526 [2024-11-20 13:40:03.709057] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:05.897 13:40:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:20:05.897 00:20:05.897 real 0m21.556s 00:20:05.897 user 0m27.990s 00:20:05.897 sys 0m2.946s 00:20:05.897 13:40:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:05.897 ************************************ 00:20:05.897 13:40:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:05.897 END TEST raid_rebuild_test_sb_io 00:20:05.897 
************************************ 00:20:05.897 13:40:05 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:20:05.897 13:40:05 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:20:05.897 13:40:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:05.897 13:40:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:05.897 13:40:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:05.897 ************************************ 00:20:05.897 START TEST raid5f_state_function_test 00:20:05.897 ************************************ 00:20:05.897 13:40:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:20:05.897 13:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:20:05.898 13:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:20:05.898 13:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:20:05.898 13:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:05.898 13:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:05.898 13:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:05.898 13:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:05.898 13:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:05.898 13:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:05.898 13:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:05.898 13:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:05.898 13:40:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:05.898 13:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:20:05.898 13:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:05.898 13:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:05.898 13:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:05.898 13:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:05.898 13:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:05.898 13:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:05.898 13:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:05.898 13:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:05.898 13:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:20:05.898 13:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:20:05.898 13:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:20:05.898 13:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:20:05.898 13:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:20:05.898 13:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=79688 00:20:05.898 13:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:05.898 13:40:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79688' 00:20:05.898 Process raid pid: 79688 00:20:05.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:05.898 13:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 79688 00:20:05.898 13:40:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 79688 ']' 00:20:05.898 13:40:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:05.898 13:40:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:05.898 13:40:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:05.898 13:40:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:05.898 13:40:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.898 [2024-11-20 13:40:05.184971] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:20:05.898 [2024-11-20 13:40:05.185328] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:05.898 [2024-11-20 13:40:05.370900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.157 [2024-11-20 13:40:05.497536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.416 [2024-11-20 13:40:05.736271] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:06.416 [2024-11-20 13:40:05.736306] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:06.674 13:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:06.674 13:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:20:06.674 13:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:06.674 13:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.674 13:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.674 [2024-11-20 13:40:06.069996] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:06.674 [2024-11-20 13:40:06.070074] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:06.674 [2024-11-20 13:40:06.070091] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:06.674 [2024-11-20 13:40:06.070108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:06.674 [2024-11-20 13:40:06.070120] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:20:06.674 [2024-11-20 13:40:06.070136] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:06.674 13:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.674 13:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:06.674 13:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:06.674 13:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:06.674 13:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:06.674 13:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:06.674 13:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:06.674 13:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:06.674 13:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:06.674 13:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:06.674 13:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:06.674 13:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.674 13:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.674 13:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:06.674 13:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.674 13:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:20:06.674 13:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:06.674 "name": "Existed_Raid", 00:20:06.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.674 "strip_size_kb": 64, 00:20:06.674 "state": "configuring", 00:20:06.674 "raid_level": "raid5f", 00:20:06.674 "superblock": false, 00:20:06.674 "num_base_bdevs": 3, 00:20:06.674 "num_base_bdevs_discovered": 0, 00:20:06.674 "num_base_bdevs_operational": 3, 00:20:06.674 "base_bdevs_list": [ 00:20:06.674 { 00:20:06.674 "name": "BaseBdev1", 00:20:06.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.674 "is_configured": false, 00:20:06.674 "data_offset": 0, 00:20:06.674 "data_size": 0 00:20:06.674 }, 00:20:06.674 { 00:20:06.674 "name": "BaseBdev2", 00:20:06.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.674 "is_configured": false, 00:20:06.674 "data_offset": 0, 00:20:06.674 "data_size": 0 00:20:06.674 }, 00:20:06.674 { 00:20:06.674 "name": "BaseBdev3", 00:20:06.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.674 "is_configured": false, 00:20:06.674 "data_offset": 0, 00:20:06.674 "data_size": 0 00:20:06.674 } 00:20:06.674 ] 00:20:06.674 }' 00:20:06.674 13:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:06.674 13:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.248 13:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:07.248 13:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.248 13:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.248 [2024-11-20 13:40:06.525340] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:07.248 [2024-11-20 13:40:06.525599] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:20:07.248 13:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.248 13:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:07.248 13:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.248 13:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.248 [2024-11-20 13:40:06.537305] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:07.248 [2024-11-20 13:40:06.537475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:07.248 [2024-11-20 13:40:06.537580] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:07.248 [2024-11-20 13:40:06.537629] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:07.248 [2024-11-20 13:40:06.537705] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:07.248 [2024-11-20 13:40:06.537749] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:07.248 13:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.248 13:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:07.248 13:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.248 13:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.248 [2024-11-20 13:40:06.589662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:07.248 BaseBdev1 00:20:07.248 13:40:06 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.248 13:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:07.248 13:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:07.248 13:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:07.248 13:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:07.248 13:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:07.248 13:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:07.248 13:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:07.248 13:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.248 13:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.248 13:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.248 13:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:07.248 13:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.248 13:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.248 [ 00:20:07.248 { 00:20:07.248 "name": "BaseBdev1", 00:20:07.248 "aliases": [ 00:20:07.248 "785ef980-2091-4755-98c6-3c4f1f88f62f" 00:20:07.248 ], 00:20:07.248 "product_name": "Malloc disk", 00:20:07.248 "block_size": 512, 00:20:07.248 "num_blocks": 65536, 00:20:07.248 "uuid": "785ef980-2091-4755-98c6-3c4f1f88f62f", 00:20:07.248 "assigned_rate_limits": { 00:20:07.248 "rw_ios_per_sec": 0, 00:20:07.248 
"rw_mbytes_per_sec": 0, 00:20:07.248 "r_mbytes_per_sec": 0, 00:20:07.248 "w_mbytes_per_sec": 0 00:20:07.248 }, 00:20:07.248 "claimed": true, 00:20:07.248 "claim_type": "exclusive_write", 00:20:07.248 "zoned": false, 00:20:07.248 "supported_io_types": { 00:20:07.248 "read": true, 00:20:07.248 "write": true, 00:20:07.248 "unmap": true, 00:20:07.248 "flush": true, 00:20:07.248 "reset": true, 00:20:07.248 "nvme_admin": false, 00:20:07.248 "nvme_io": false, 00:20:07.248 "nvme_io_md": false, 00:20:07.248 "write_zeroes": true, 00:20:07.248 "zcopy": true, 00:20:07.248 "get_zone_info": false, 00:20:07.248 "zone_management": false, 00:20:07.248 "zone_append": false, 00:20:07.248 "compare": false, 00:20:07.248 "compare_and_write": false, 00:20:07.248 "abort": true, 00:20:07.248 "seek_hole": false, 00:20:07.248 "seek_data": false, 00:20:07.248 "copy": true, 00:20:07.248 "nvme_iov_md": false 00:20:07.248 }, 00:20:07.248 "memory_domains": [ 00:20:07.248 { 00:20:07.248 "dma_device_id": "system", 00:20:07.248 "dma_device_type": 1 00:20:07.248 }, 00:20:07.248 { 00:20:07.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:07.248 "dma_device_type": 2 00:20:07.248 } 00:20:07.248 ], 00:20:07.248 "driver_specific": {} 00:20:07.248 } 00:20:07.249 ] 00:20:07.249 13:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.249 13:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:07.249 13:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:07.249 13:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:07.249 13:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:07.249 13:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:07.249 13:40:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:07.249 13:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:07.249 13:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:07.249 13:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:07.249 13:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:07.249 13:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:07.249 13:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.249 13:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:07.249 13:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.249 13:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.249 13:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.249 13:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:07.249 "name": "Existed_Raid", 00:20:07.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.249 "strip_size_kb": 64, 00:20:07.249 "state": "configuring", 00:20:07.249 "raid_level": "raid5f", 00:20:07.249 "superblock": false, 00:20:07.249 "num_base_bdevs": 3, 00:20:07.249 "num_base_bdevs_discovered": 1, 00:20:07.249 "num_base_bdevs_operational": 3, 00:20:07.249 "base_bdevs_list": [ 00:20:07.249 { 00:20:07.249 "name": "BaseBdev1", 00:20:07.249 "uuid": "785ef980-2091-4755-98c6-3c4f1f88f62f", 00:20:07.249 "is_configured": true, 00:20:07.249 "data_offset": 0, 00:20:07.249 "data_size": 65536 00:20:07.249 }, 00:20:07.249 { 00:20:07.249 "name": 
"BaseBdev2", 00:20:07.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.249 "is_configured": false, 00:20:07.249 "data_offset": 0, 00:20:07.249 "data_size": 0 00:20:07.249 }, 00:20:07.249 { 00:20:07.249 "name": "BaseBdev3", 00:20:07.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.249 "is_configured": false, 00:20:07.249 "data_offset": 0, 00:20:07.249 "data_size": 0 00:20:07.249 } 00:20:07.249 ] 00:20:07.249 }' 00:20:07.249 13:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:07.249 13:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.816 13:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:07.816 13:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.816 13:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.816 [2024-11-20 13:40:07.148982] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:07.816 [2024-11-20 13:40:07.149038] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:07.816 13:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.816 13:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:07.816 13:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.816 13:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.816 [2024-11-20 13:40:07.161008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:07.816 [2024-11-20 13:40:07.163243] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:20:07.816 [2024-11-20 13:40:07.163293] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:07.816 [2024-11-20 13:40:07.163306] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:07.816 [2024-11-20 13:40:07.163319] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:07.816 13:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.816 13:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:07.816 13:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:07.816 13:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:07.816 13:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:07.816 13:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:07.816 13:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:07.817 13:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:07.817 13:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:07.817 13:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:07.817 13:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:07.817 13:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:07.817 13:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:07.817 13:40:07 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:07.817 13:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.817 13:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.817 13:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.817 13:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.817 13:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:07.817 "name": "Existed_Raid", 00:20:07.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.817 "strip_size_kb": 64, 00:20:07.817 "state": "configuring", 00:20:07.817 "raid_level": "raid5f", 00:20:07.817 "superblock": false, 00:20:07.817 "num_base_bdevs": 3, 00:20:07.817 "num_base_bdevs_discovered": 1, 00:20:07.817 "num_base_bdevs_operational": 3, 00:20:07.817 "base_bdevs_list": [ 00:20:07.817 { 00:20:07.817 "name": "BaseBdev1", 00:20:07.817 "uuid": "785ef980-2091-4755-98c6-3c4f1f88f62f", 00:20:07.817 "is_configured": true, 00:20:07.817 "data_offset": 0, 00:20:07.817 "data_size": 65536 00:20:07.817 }, 00:20:07.817 { 00:20:07.817 "name": "BaseBdev2", 00:20:07.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.817 "is_configured": false, 00:20:07.817 "data_offset": 0, 00:20:07.817 "data_size": 0 00:20:07.817 }, 00:20:07.817 { 00:20:07.817 "name": "BaseBdev3", 00:20:07.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.817 "is_configured": false, 00:20:07.817 "data_offset": 0, 00:20:07.817 "data_size": 0 00:20:07.817 } 00:20:07.817 ] 00:20:07.817 }' 00:20:07.817 13:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:07.817 13:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.384 13:40:07 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:08.384 13:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.384 13:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.384 [2024-11-20 13:40:07.661892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:08.384 BaseBdev2 00:20:08.384 13:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.385 13:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:08.385 13:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:08.385 13:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:08.385 13:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:08.385 13:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:08.385 13:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:08.385 13:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:08.385 13:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.385 13:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.385 13:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.385 13:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:08.385 13:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.385 13:40:07 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:08.385 [ 00:20:08.385 { 00:20:08.385 "name": "BaseBdev2", 00:20:08.385 "aliases": [ 00:20:08.385 "30f153ef-d769-4a2e-b785-f56b921330d5" 00:20:08.385 ], 00:20:08.385 "product_name": "Malloc disk", 00:20:08.385 "block_size": 512, 00:20:08.385 "num_blocks": 65536, 00:20:08.385 "uuid": "30f153ef-d769-4a2e-b785-f56b921330d5", 00:20:08.385 "assigned_rate_limits": { 00:20:08.385 "rw_ios_per_sec": 0, 00:20:08.385 "rw_mbytes_per_sec": 0, 00:20:08.385 "r_mbytes_per_sec": 0, 00:20:08.385 "w_mbytes_per_sec": 0 00:20:08.385 }, 00:20:08.385 "claimed": true, 00:20:08.385 "claim_type": "exclusive_write", 00:20:08.385 "zoned": false, 00:20:08.385 "supported_io_types": { 00:20:08.385 "read": true, 00:20:08.385 "write": true, 00:20:08.385 "unmap": true, 00:20:08.385 "flush": true, 00:20:08.385 "reset": true, 00:20:08.385 "nvme_admin": false, 00:20:08.385 "nvme_io": false, 00:20:08.385 "nvme_io_md": false, 00:20:08.385 "write_zeroes": true, 00:20:08.385 "zcopy": true, 00:20:08.385 "get_zone_info": false, 00:20:08.385 "zone_management": false, 00:20:08.385 "zone_append": false, 00:20:08.385 "compare": false, 00:20:08.385 "compare_and_write": false, 00:20:08.385 "abort": true, 00:20:08.385 "seek_hole": false, 00:20:08.385 "seek_data": false, 00:20:08.385 "copy": true, 00:20:08.385 "nvme_iov_md": false 00:20:08.385 }, 00:20:08.385 "memory_domains": [ 00:20:08.385 { 00:20:08.385 "dma_device_id": "system", 00:20:08.385 "dma_device_type": 1 00:20:08.385 }, 00:20:08.385 { 00:20:08.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:08.385 "dma_device_type": 2 00:20:08.385 } 00:20:08.385 ], 00:20:08.385 "driver_specific": {} 00:20:08.385 } 00:20:08.385 ] 00:20:08.385 13:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.385 13:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:08.385 13:40:07 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:08.385 13:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:08.385 13:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:08.385 13:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:08.385 13:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:08.385 13:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:08.385 13:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:08.385 13:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:08.385 13:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:08.385 13:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:08.385 13:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:08.385 13:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:08.385 13:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.385 13:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.385 13:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.385 13:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:08.385 13:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.385 13:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:20:08.385 "name": "Existed_Raid", 00:20:08.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.385 "strip_size_kb": 64, 00:20:08.385 "state": "configuring", 00:20:08.385 "raid_level": "raid5f", 00:20:08.385 "superblock": false, 00:20:08.385 "num_base_bdevs": 3, 00:20:08.385 "num_base_bdevs_discovered": 2, 00:20:08.385 "num_base_bdevs_operational": 3, 00:20:08.385 "base_bdevs_list": [ 00:20:08.385 { 00:20:08.385 "name": "BaseBdev1", 00:20:08.385 "uuid": "785ef980-2091-4755-98c6-3c4f1f88f62f", 00:20:08.385 "is_configured": true, 00:20:08.385 "data_offset": 0, 00:20:08.385 "data_size": 65536 00:20:08.385 }, 00:20:08.385 { 00:20:08.385 "name": "BaseBdev2", 00:20:08.385 "uuid": "30f153ef-d769-4a2e-b785-f56b921330d5", 00:20:08.385 "is_configured": true, 00:20:08.385 "data_offset": 0, 00:20:08.385 "data_size": 65536 00:20:08.385 }, 00:20:08.385 { 00:20:08.385 "name": "BaseBdev3", 00:20:08.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.385 "is_configured": false, 00:20:08.385 "data_offset": 0, 00:20:08.385 "data_size": 0 00:20:08.385 } 00:20:08.385 ] 00:20:08.385 }' 00:20:08.385 13:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:08.385 13:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.953 13:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:08.953 13:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.953 13:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.953 [2024-11-20 13:40:08.243211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:08.953 [2024-11-20 13:40:08.243276] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:08.953 [2024-11-20 13:40:08.243295] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:20:08.953 [2024-11-20 13:40:08.243589] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:08.953 [2024-11-20 13:40:08.249646] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:08.953 [2024-11-20 13:40:08.249684] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:08.953 [2024-11-20 13:40:08.249986] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:08.953 BaseBdev3 00:20:08.953 13:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.953 13:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:20:08.953 13:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:08.953 13:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:08.953 13:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:08.953 13:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:08.953 13:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:08.953 13:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:08.953 13:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.953 13:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.953 13:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.953 13:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:20:08.953 13:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.953 13:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.953 [ 00:20:08.953 { 00:20:08.953 "name": "BaseBdev3", 00:20:08.953 "aliases": [ 00:20:08.953 "be64088c-c881-4b03-9acd-f3bbea168438" 00:20:08.953 ], 00:20:08.953 "product_name": "Malloc disk", 00:20:08.953 "block_size": 512, 00:20:08.953 "num_blocks": 65536, 00:20:08.953 "uuid": "be64088c-c881-4b03-9acd-f3bbea168438", 00:20:08.953 "assigned_rate_limits": { 00:20:08.953 "rw_ios_per_sec": 0, 00:20:08.953 "rw_mbytes_per_sec": 0, 00:20:08.953 "r_mbytes_per_sec": 0, 00:20:08.953 "w_mbytes_per_sec": 0 00:20:08.953 }, 00:20:08.953 "claimed": true, 00:20:08.953 "claim_type": "exclusive_write", 00:20:08.953 "zoned": false, 00:20:08.953 "supported_io_types": { 00:20:08.953 "read": true, 00:20:08.953 "write": true, 00:20:08.953 "unmap": true, 00:20:08.953 "flush": true, 00:20:08.953 "reset": true, 00:20:08.953 "nvme_admin": false, 00:20:08.953 "nvme_io": false, 00:20:08.953 "nvme_io_md": false, 00:20:08.953 "write_zeroes": true, 00:20:08.953 "zcopy": true, 00:20:08.953 "get_zone_info": false, 00:20:08.953 "zone_management": false, 00:20:08.953 "zone_append": false, 00:20:08.953 "compare": false, 00:20:08.953 "compare_and_write": false, 00:20:08.953 "abort": true, 00:20:08.953 "seek_hole": false, 00:20:08.953 "seek_data": false, 00:20:08.953 "copy": true, 00:20:08.953 "nvme_iov_md": false 00:20:08.953 }, 00:20:08.953 "memory_domains": [ 00:20:08.953 { 00:20:08.953 "dma_device_id": "system", 00:20:08.953 "dma_device_type": 1 00:20:08.953 }, 00:20:08.953 { 00:20:08.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:08.953 "dma_device_type": 2 00:20:08.953 } 00:20:08.953 ], 00:20:08.953 "driver_specific": {} 00:20:08.953 } 00:20:08.953 ] 00:20:08.953 13:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:20:08.953 13:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:08.953 13:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:08.953 13:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:08.953 13:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:20:08.953 13:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:08.953 13:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:08.953 13:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:08.953 13:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:08.953 13:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:08.953 13:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:08.953 13:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:08.953 13:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:08.953 13:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:08.953 13:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.953 13:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:08.953 13:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.953 13:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.953 13:40:08 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.953 13:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:08.953 "name": "Existed_Raid", 00:20:08.953 "uuid": "eb2bcfa6-b384-42a7-a55e-4e4dc22b8782", 00:20:08.953 "strip_size_kb": 64, 00:20:08.953 "state": "online", 00:20:08.953 "raid_level": "raid5f", 00:20:08.953 "superblock": false, 00:20:08.953 "num_base_bdevs": 3, 00:20:08.953 "num_base_bdevs_discovered": 3, 00:20:08.953 "num_base_bdevs_operational": 3, 00:20:08.953 "base_bdevs_list": [ 00:20:08.953 { 00:20:08.953 "name": "BaseBdev1", 00:20:08.953 "uuid": "785ef980-2091-4755-98c6-3c4f1f88f62f", 00:20:08.953 "is_configured": true, 00:20:08.953 "data_offset": 0, 00:20:08.953 "data_size": 65536 00:20:08.953 }, 00:20:08.953 { 00:20:08.953 "name": "BaseBdev2", 00:20:08.953 "uuid": "30f153ef-d769-4a2e-b785-f56b921330d5", 00:20:08.953 "is_configured": true, 00:20:08.953 "data_offset": 0, 00:20:08.953 "data_size": 65536 00:20:08.953 }, 00:20:08.953 { 00:20:08.953 "name": "BaseBdev3", 00:20:08.953 "uuid": "be64088c-c881-4b03-9acd-f3bbea168438", 00:20:08.953 "is_configured": true, 00:20:08.953 "data_offset": 0, 00:20:08.953 "data_size": 65536 00:20:08.953 } 00:20:08.953 ] 00:20:08.953 }' 00:20:08.953 13:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:08.953 13:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.520 13:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:09.520 13:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:09.520 13:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:09.520 13:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:09.520 13:40:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:09.520 13:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:09.520 13:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:09.520 13:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.520 13:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:09.520 13:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.520 [2024-11-20 13:40:08.756326] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:09.520 13:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.520 13:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:09.520 "name": "Existed_Raid", 00:20:09.520 "aliases": [ 00:20:09.520 "eb2bcfa6-b384-42a7-a55e-4e4dc22b8782" 00:20:09.520 ], 00:20:09.520 "product_name": "Raid Volume", 00:20:09.520 "block_size": 512, 00:20:09.520 "num_blocks": 131072, 00:20:09.520 "uuid": "eb2bcfa6-b384-42a7-a55e-4e4dc22b8782", 00:20:09.520 "assigned_rate_limits": { 00:20:09.520 "rw_ios_per_sec": 0, 00:20:09.520 "rw_mbytes_per_sec": 0, 00:20:09.520 "r_mbytes_per_sec": 0, 00:20:09.520 "w_mbytes_per_sec": 0 00:20:09.520 }, 00:20:09.520 "claimed": false, 00:20:09.520 "zoned": false, 00:20:09.520 "supported_io_types": { 00:20:09.520 "read": true, 00:20:09.520 "write": true, 00:20:09.520 "unmap": false, 00:20:09.520 "flush": false, 00:20:09.520 "reset": true, 00:20:09.520 "nvme_admin": false, 00:20:09.520 "nvme_io": false, 00:20:09.520 "nvme_io_md": false, 00:20:09.520 "write_zeroes": true, 00:20:09.520 "zcopy": false, 00:20:09.520 "get_zone_info": false, 00:20:09.520 "zone_management": false, 00:20:09.520 "zone_append": false, 
00:20:09.520 "compare": false, 00:20:09.520 "compare_and_write": false, 00:20:09.520 "abort": false, 00:20:09.520 "seek_hole": false, 00:20:09.520 "seek_data": false, 00:20:09.520 "copy": false, 00:20:09.520 "nvme_iov_md": false 00:20:09.520 }, 00:20:09.520 "driver_specific": { 00:20:09.520 "raid": { 00:20:09.520 "uuid": "eb2bcfa6-b384-42a7-a55e-4e4dc22b8782", 00:20:09.520 "strip_size_kb": 64, 00:20:09.520 "state": "online", 00:20:09.520 "raid_level": "raid5f", 00:20:09.520 "superblock": false, 00:20:09.520 "num_base_bdevs": 3, 00:20:09.520 "num_base_bdevs_discovered": 3, 00:20:09.520 "num_base_bdevs_operational": 3, 00:20:09.520 "base_bdevs_list": [ 00:20:09.520 { 00:20:09.520 "name": "BaseBdev1", 00:20:09.520 "uuid": "785ef980-2091-4755-98c6-3c4f1f88f62f", 00:20:09.520 "is_configured": true, 00:20:09.520 "data_offset": 0, 00:20:09.520 "data_size": 65536 00:20:09.520 }, 00:20:09.520 { 00:20:09.520 "name": "BaseBdev2", 00:20:09.520 "uuid": "30f153ef-d769-4a2e-b785-f56b921330d5", 00:20:09.520 "is_configured": true, 00:20:09.520 "data_offset": 0, 00:20:09.520 "data_size": 65536 00:20:09.520 }, 00:20:09.520 { 00:20:09.520 "name": "BaseBdev3", 00:20:09.520 "uuid": "be64088c-c881-4b03-9acd-f3bbea168438", 00:20:09.520 "is_configured": true, 00:20:09.520 "data_offset": 0, 00:20:09.520 "data_size": 65536 00:20:09.520 } 00:20:09.520 ] 00:20:09.520 } 00:20:09.520 } 00:20:09.520 }' 00:20:09.520 13:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:09.520 13:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:09.520 BaseBdev2 00:20:09.520 BaseBdev3' 00:20:09.520 13:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:09.520 13:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:20:09.520 13:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:09.520 13:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:09.520 13:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:09.520 13:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.520 13:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.520 13:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.520 13:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:09.520 13:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:09.520 13:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:09.520 13:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:09.520 13:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:09.520 13:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.520 13:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.520 13:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.520 13:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:09.520 13:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:09.520 13:40:08 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:09.520 13:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:09.520 13:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:09.520 13:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.520 13:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.779 13:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.779 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:09.779 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:09.779 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:09.779 13:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.779 13:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.779 [2024-11-20 13:40:09.043770] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:09.779 13:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.779 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:09.779 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:20:09.779 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:09.779 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:20:09.779 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:09.779 
13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:20:09.779 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:09.779 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:09.779 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:09.779 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:09.779 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:09.779 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:09.779 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:09.779 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:09.779 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:09.779 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.779 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:09.779 13:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.779 13:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.779 13:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.779 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:09.779 "name": "Existed_Raid", 00:20:09.779 "uuid": "eb2bcfa6-b384-42a7-a55e-4e4dc22b8782", 00:20:09.779 "strip_size_kb": 64, 00:20:09.779 "state": 
"online", 00:20:09.779 "raid_level": "raid5f", 00:20:09.779 "superblock": false, 00:20:09.779 "num_base_bdevs": 3, 00:20:09.779 "num_base_bdevs_discovered": 2, 00:20:09.779 "num_base_bdevs_operational": 2, 00:20:09.779 "base_bdevs_list": [ 00:20:09.779 { 00:20:09.779 "name": null, 00:20:09.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.779 "is_configured": false, 00:20:09.779 "data_offset": 0, 00:20:09.779 "data_size": 65536 00:20:09.779 }, 00:20:09.779 { 00:20:09.779 "name": "BaseBdev2", 00:20:09.779 "uuid": "30f153ef-d769-4a2e-b785-f56b921330d5", 00:20:09.779 "is_configured": true, 00:20:09.779 "data_offset": 0, 00:20:09.779 "data_size": 65536 00:20:09.779 }, 00:20:09.779 { 00:20:09.779 "name": "BaseBdev3", 00:20:09.779 "uuid": "be64088c-c881-4b03-9acd-f3bbea168438", 00:20:09.779 "is_configured": true, 00:20:09.779 "data_offset": 0, 00:20:09.779 "data_size": 65536 00:20:09.779 } 00:20:09.779 ] 00:20:09.779 }' 00:20:09.779 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:09.779 13:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.344 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:10.344 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:10.344 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.344 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:10.344 13:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.344 13:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.344 13:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.344 13:40:09 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:10.344 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:10.344 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:10.344 13:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.344 13:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.344 [2024-11-20 13:40:09.655523] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:10.344 [2024-11-20 13:40:09.655627] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:10.344 [2024-11-20 13:40:09.751433] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:10.344 13:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.344 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:10.344 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:10.344 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.345 13:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.345 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:10.345 13:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.345 13:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.345 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:10.345 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:20:10.345 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:20:10.345 13:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.345 13:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.345 [2024-11-20 13:40:09.807405] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:10.345 [2024-11-20 13:40:09.807589] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:10.602 13:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.602 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:10.602 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:10.602 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.602 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:10.602 13:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.602 13:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.602 13:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.602 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:10.602 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:10.602 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:20:10.602 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:20:10.602 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:20:10.602 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:10.602 13:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.602 13:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.602 BaseBdev2 00:20:10.602 13:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.602 13:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:20:10.602 13:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:10.602 13:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:10.602 13:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:10.602 13:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:10.602 13:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:10.602 13:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:10.602 13:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.602 13:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.602 13:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.602 13:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:10.602 13:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.602 13:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:20:10.602 [ 00:20:10.602 { 00:20:10.602 "name": "BaseBdev2", 00:20:10.602 "aliases": [ 00:20:10.602 "bc9bc501-4d6b-4d90-99df-c408831222a6" 00:20:10.602 ], 00:20:10.602 "product_name": "Malloc disk", 00:20:10.602 "block_size": 512, 00:20:10.602 "num_blocks": 65536, 00:20:10.602 "uuid": "bc9bc501-4d6b-4d90-99df-c408831222a6", 00:20:10.602 "assigned_rate_limits": { 00:20:10.602 "rw_ios_per_sec": 0, 00:20:10.602 "rw_mbytes_per_sec": 0, 00:20:10.602 "r_mbytes_per_sec": 0, 00:20:10.602 "w_mbytes_per_sec": 0 00:20:10.602 }, 00:20:10.602 "claimed": false, 00:20:10.602 "zoned": false, 00:20:10.602 "supported_io_types": { 00:20:10.602 "read": true, 00:20:10.602 "write": true, 00:20:10.602 "unmap": true, 00:20:10.603 "flush": true, 00:20:10.603 "reset": true, 00:20:10.603 "nvme_admin": false, 00:20:10.603 "nvme_io": false, 00:20:10.603 "nvme_io_md": false, 00:20:10.603 "write_zeroes": true, 00:20:10.603 "zcopy": true, 00:20:10.603 "get_zone_info": false, 00:20:10.603 "zone_management": false, 00:20:10.603 "zone_append": false, 00:20:10.603 "compare": false, 00:20:10.603 "compare_and_write": false, 00:20:10.603 "abort": true, 00:20:10.603 "seek_hole": false, 00:20:10.603 "seek_data": false, 00:20:10.603 "copy": true, 00:20:10.603 "nvme_iov_md": false 00:20:10.603 }, 00:20:10.603 "memory_domains": [ 00:20:10.603 { 00:20:10.603 "dma_device_id": "system", 00:20:10.603 "dma_device_type": 1 00:20:10.603 }, 00:20:10.603 { 00:20:10.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.603 "dma_device_type": 2 00:20:10.603 } 00:20:10.603 ], 00:20:10.603 "driver_specific": {} 00:20:10.603 } 00:20:10.603 ] 00:20:10.603 13:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.603 13:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:10.603 13:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:10.603 13:40:10 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:10.603 13:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:10.603 13:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.603 13:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.603 BaseBdev3 00:20:10.603 13:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.603 13:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:20:10.603 13:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:10.861 13:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:10.861 13:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:10.861 13:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:10.861 13:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:10.861 13:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:10.861 13:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.861 13:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.861 13:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.861 13:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:10.861 13:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.861 13:40:10 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:10.861 [ 00:20:10.861 { 00:20:10.861 "name": "BaseBdev3", 00:20:10.861 "aliases": [ 00:20:10.861 "8949836c-1121-4db0-806b-99ee779c86cc" 00:20:10.861 ], 00:20:10.861 "product_name": "Malloc disk", 00:20:10.861 "block_size": 512, 00:20:10.861 "num_blocks": 65536, 00:20:10.861 "uuid": "8949836c-1121-4db0-806b-99ee779c86cc", 00:20:10.861 "assigned_rate_limits": { 00:20:10.861 "rw_ios_per_sec": 0, 00:20:10.861 "rw_mbytes_per_sec": 0, 00:20:10.861 "r_mbytes_per_sec": 0, 00:20:10.861 "w_mbytes_per_sec": 0 00:20:10.861 }, 00:20:10.861 "claimed": false, 00:20:10.861 "zoned": false, 00:20:10.861 "supported_io_types": { 00:20:10.861 "read": true, 00:20:10.861 "write": true, 00:20:10.861 "unmap": true, 00:20:10.861 "flush": true, 00:20:10.861 "reset": true, 00:20:10.861 "nvme_admin": false, 00:20:10.861 "nvme_io": false, 00:20:10.861 "nvme_io_md": false, 00:20:10.861 "write_zeroes": true, 00:20:10.861 "zcopy": true, 00:20:10.861 "get_zone_info": false, 00:20:10.861 "zone_management": false, 00:20:10.861 "zone_append": false, 00:20:10.861 "compare": false, 00:20:10.861 "compare_and_write": false, 00:20:10.861 "abort": true, 00:20:10.861 "seek_hole": false, 00:20:10.861 "seek_data": false, 00:20:10.861 "copy": true, 00:20:10.861 "nvme_iov_md": false 00:20:10.861 }, 00:20:10.861 "memory_domains": [ 00:20:10.861 { 00:20:10.861 "dma_device_id": "system", 00:20:10.861 "dma_device_type": 1 00:20:10.861 }, 00:20:10.861 { 00:20:10.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.861 "dma_device_type": 2 00:20:10.861 } 00:20:10.861 ], 00:20:10.861 "driver_specific": {} 00:20:10.861 } 00:20:10.861 ] 00:20:10.861 13:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.861 13:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:10.861 13:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:10.862 13:40:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:10.862 13:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:10.862 13:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.862 13:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.862 [2024-11-20 13:40:10.139767] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:10.862 [2024-11-20 13:40:10.140703] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:10.862 [2024-11-20 13:40:10.140809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:10.862 [2024-11-20 13:40:10.142892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:10.862 13:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.862 13:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:10.862 13:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:10.862 13:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:10.862 13:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:10.862 13:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:10.862 13:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:10.862 13:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:10.862 13:40:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:10.862 13:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:10.862 13:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:10.862 13:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.862 13:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.862 13:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:10.862 13:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.862 13:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.862 13:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:10.862 "name": "Existed_Raid", 00:20:10.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.862 "strip_size_kb": 64, 00:20:10.862 "state": "configuring", 00:20:10.862 "raid_level": "raid5f", 00:20:10.862 "superblock": false, 00:20:10.862 "num_base_bdevs": 3, 00:20:10.862 "num_base_bdevs_discovered": 2, 00:20:10.862 "num_base_bdevs_operational": 3, 00:20:10.862 "base_bdevs_list": [ 00:20:10.862 { 00:20:10.862 "name": "BaseBdev1", 00:20:10.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.862 "is_configured": false, 00:20:10.862 "data_offset": 0, 00:20:10.862 "data_size": 0 00:20:10.862 }, 00:20:10.862 { 00:20:10.862 "name": "BaseBdev2", 00:20:10.862 "uuid": "bc9bc501-4d6b-4d90-99df-c408831222a6", 00:20:10.862 "is_configured": true, 00:20:10.862 "data_offset": 0, 00:20:10.862 "data_size": 65536 00:20:10.862 }, 00:20:10.862 { 00:20:10.862 "name": "BaseBdev3", 00:20:10.862 "uuid": "8949836c-1121-4db0-806b-99ee779c86cc", 00:20:10.862 "is_configured": true, 
00:20:10.862 "data_offset": 0, 00:20:10.862 "data_size": 65536 00:20:10.862 } 00:20:10.862 ] 00:20:10.862 }' 00:20:10.862 13:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:10.862 13:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.122 13:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:11.122 13:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.122 13:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.122 [2024-11-20 13:40:10.579212] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:11.122 13:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.122 13:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:11.122 13:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:11.122 13:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:11.122 13:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:11.122 13:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:11.122 13:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:11.122 13:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:11.122 13:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:11.122 13:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:11.122 13:40:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:11.122 13:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:11.122 13:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.122 13:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.122 13:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.382 13:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.382 13:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.382 "name": "Existed_Raid", 00:20:11.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.382 "strip_size_kb": 64, 00:20:11.382 "state": "configuring", 00:20:11.382 "raid_level": "raid5f", 00:20:11.382 "superblock": false, 00:20:11.382 "num_base_bdevs": 3, 00:20:11.382 "num_base_bdevs_discovered": 1, 00:20:11.382 "num_base_bdevs_operational": 3, 00:20:11.382 "base_bdevs_list": [ 00:20:11.382 { 00:20:11.382 "name": "BaseBdev1", 00:20:11.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.382 "is_configured": false, 00:20:11.382 "data_offset": 0, 00:20:11.382 "data_size": 0 00:20:11.382 }, 00:20:11.382 { 00:20:11.382 "name": null, 00:20:11.382 "uuid": "bc9bc501-4d6b-4d90-99df-c408831222a6", 00:20:11.382 "is_configured": false, 00:20:11.382 "data_offset": 0, 00:20:11.382 "data_size": 65536 00:20:11.382 }, 00:20:11.382 { 00:20:11.382 "name": "BaseBdev3", 00:20:11.382 "uuid": "8949836c-1121-4db0-806b-99ee779c86cc", 00:20:11.382 "is_configured": true, 00:20:11.382 "data_offset": 0, 00:20:11.382 "data_size": 65536 00:20:11.382 } 00:20:11.382 ] 00:20:11.382 }' 00:20:11.382 13:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:11.382 13:40:10 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.641 13:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.641 13:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.641 13:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.641 13:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:11.641 13:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.641 13:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:20:11.641 13:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:11.641 13:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.641 13:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.641 [2024-11-20 13:40:11.088504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:11.641 BaseBdev1 00:20:11.641 13:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.641 13:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:20:11.641 13:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:11.641 13:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:11.641 13:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:11.641 13:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:11.641 13:40:11 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:11.641 13:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:11.641 13:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.641 13:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.641 13:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.641 13:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:11.641 13:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.641 13:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.641 [ 00:20:11.641 { 00:20:11.641 "name": "BaseBdev1", 00:20:11.641 "aliases": [ 00:20:11.641 "50bd69ee-fdcc-4cb9-a909-b48a97b8d6df" 00:20:11.641 ], 00:20:11.641 "product_name": "Malloc disk", 00:20:11.641 "block_size": 512, 00:20:11.641 "num_blocks": 65536, 00:20:11.641 "uuid": "50bd69ee-fdcc-4cb9-a909-b48a97b8d6df", 00:20:11.641 "assigned_rate_limits": { 00:20:11.641 "rw_ios_per_sec": 0, 00:20:11.641 "rw_mbytes_per_sec": 0, 00:20:11.641 "r_mbytes_per_sec": 0, 00:20:11.641 "w_mbytes_per_sec": 0 00:20:11.641 }, 00:20:11.641 "claimed": true, 00:20:11.641 "claim_type": "exclusive_write", 00:20:11.641 "zoned": false, 00:20:11.641 "supported_io_types": { 00:20:11.641 "read": true, 00:20:11.641 "write": true, 00:20:11.641 "unmap": true, 00:20:11.641 "flush": true, 00:20:11.911 "reset": true, 00:20:11.911 "nvme_admin": false, 00:20:11.911 "nvme_io": false, 00:20:11.911 "nvme_io_md": false, 00:20:11.911 "write_zeroes": true, 00:20:11.911 "zcopy": true, 00:20:11.911 "get_zone_info": false, 00:20:11.911 "zone_management": false, 00:20:11.911 "zone_append": false, 00:20:11.911 
"compare": false, 00:20:11.911 "compare_and_write": false, 00:20:11.911 "abort": true, 00:20:11.911 "seek_hole": false, 00:20:11.911 "seek_data": false, 00:20:11.911 "copy": true, 00:20:11.911 "nvme_iov_md": false 00:20:11.911 }, 00:20:11.911 "memory_domains": [ 00:20:11.911 { 00:20:11.911 "dma_device_id": "system", 00:20:11.911 "dma_device_type": 1 00:20:11.911 }, 00:20:11.911 { 00:20:11.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.911 "dma_device_type": 2 00:20:11.911 } 00:20:11.911 ], 00:20:11.911 "driver_specific": {} 00:20:11.911 } 00:20:11.911 ] 00:20:11.911 13:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.911 13:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:11.911 13:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:11.911 13:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:11.911 13:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:11.911 13:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:11.911 13:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:11.911 13:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:11.911 13:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:11.911 13:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:11.911 13:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:11.911 13:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:11.911 13:40:11 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.911 13:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:11.911 13:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.911 13:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.912 13:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.912 13:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.912 "name": "Existed_Raid", 00:20:11.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.912 "strip_size_kb": 64, 00:20:11.912 "state": "configuring", 00:20:11.912 "raid_level": "raid5f", 00:20:11.912 "superblock": false, 00:20:11.912 "num_base_bdevs": 3, 00:20:11.912 "num_base_bdevs_discovered": 2, 00:20:11.912 "num_base_bdevs_operational": 3, 00:20:11.912 "base_bdevs_list": [ 00:20:11.912 { 00:20:11.912 "name": "BaseBdev1", 00:20:11.912 "uuid": "50bd69ee-fdcc-4cb9-a909-b48a97b8d6df", 00:20:11.912 "is_configured": true, 00:20:11.912 "data_offset": 0, 00:20:11.912 "data_size": 65536 00:20:11.912 }, 00:20:11.912 { 00:20:11.912 "name": null, 00:20:11.912 "uuid": "bc9bc501-4d6b-4d90-99df-c408831222a6", 00:20:11.912 "is_configured": false, 00:20:11.912 "data_offset": 0, 00:20:11.912 "data_size": 65536 00:20:11.912 }, 00:20:11.912 { 00:20:11.912 "name": "BaseBdev3", 00:20:11.912 "uuid": "8949836c-1121-4db0-806b-99ee779c86cc", 00:20:11.912 "is_configured": true, 00:20:11.912 "data_offset": 0, 00:20:11.912 "data_size": 65536 00:20:11.912 } 00:20:11.912 ] 00:20:11.912 }' 00:20:11.912 13:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:11.912 13:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.204 13:40:11 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.204 13:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:12.204 13:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.204 13:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.204 13:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.204 13:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:20:12.204 13:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:20:12.204 13:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.205 13:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.205 [2024-11-20 13:40:11.627900] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:12.205 13:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.205 13:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:12.205 13:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:12.205 13:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:12.205 13:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:12.205 13:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:12.205 13:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:12.205 13:40:11 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:12.205 13:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:12.205 13:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:12.205 13:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:12.205 13:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.205 13:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.205 13:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:12.205 13:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.205 13:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.205 13:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:12.205 "name": "Existed_Raid", 00:20:12.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.205 "strip_size_kb": 64, 00:20:12.205 "state": "configuring", 00:20:12.205 "raid_level": "raid5f", 00:20:12.205 "superblock": false, 00:20:12.205 "num_base_bdevs": 3, 00:20:12.205 "num_base_bdevs_discovered": 1, 00:20:12.205 "num_base_bdevs_operational": 3, 00:20:12.205 "base_bdevs_list": [ 00:20:12.205 { 00:20:12.205 "name": "BaseBdev1", 00:20:12.205 "uuid": "50bd69ee-fdcc-4cb9-a909-b48a97b8d6df", 00:20:12.205 "is_configured": true, 00:20:12.205 "data_offset": 0, 00:20:12.205 "data_size": 65536 00:20:12.205 }, 00:20:12.205 { 00:20:12.205 "name": null, 00:20:12.205 "uuid": "bc9bc501-4d6b-4d90-99df-c408831222a6", 00:20:12.205 "is_configured": false, 00:20:12.205 "data_offset": 0, 00:20:12.205 "data_size": 65536 00:20:12.205 }, 00:20:12.205 { 00:20:12.205 "name": null, 
00:20:12.205 "uuid": "8949836c-1121-4db0-806b-99ee779c86cc", 00:20:12.205 "is_configured": false, 00:20:12.205 "data_offset": 0, 00:20:12.205 "data_size": 65536 00:20:12.205 } 00:20:12.205 ] 00:20:12.205 }' 00:20:12.205 13:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:12.205 13:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.774 13:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.774 13:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.774 13:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.774 13:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:12.774 13:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.774 13:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:20:12.774 13:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:12.774 13:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.774 13:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.774 [2024-11-20 13:40:12.119257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:12.774 13:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.774 13:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:12.774 13:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:12.774 13:40:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:12.774 13:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:12.774 13:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:12.774 13:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:12.774 13:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:12.774 13:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:12.774 13:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:12.774 13:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:12.774 13:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.774 13:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:12.774 13:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.774 13:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.774 13:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.774 13:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:12.774 "name": "Existed_Raid", 00:20:12.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.774 "strip_size_kb": 64, 00:20:12.774 "state": "configuring", 00:20:12.774 "raid_level": "raid5f", 00:20:12.774 "superblock": false, 00:20:12.774 "num_base_bdevs": 3, 00:20:12.774 "num_base_bdevs_discovered": 2, 00:20:12.774 "num_base_bdevs_operational": 3, 00:20:12.774 "base_bdevs_list": [ 00:20:12.774 { 
00:20:12.774 "name": "BaseBdev1", 00:20:12.774 "uuid": "50bd69ee-fdcc-4cb9-a909-b48a97b8d6df", 00:20:12.774 "is_configured": true, 00:20:12.774 "data_offset": 0, 00:20:12.774 "data_size": 65536 00:20:12.774 }, 00:20:12.774 { 00:20:12.774 "name": null, 00:20:12.774 "uuid": "bc9bc501-4d6b-4d90-99df-c408831222a6", 00:20:12.774 "is_configured": false, 00:20:12.774 "data_offset": 0, 00:20:12.774 "data_size": 65536 00:20:12.774 }, 00:20:12.774 { 00:20:12.774 "name": "BaseBdev3", 00:20:12.774 "uuid": "8949836c-1121-4db0-806b-99ee779c86cc", 00:20:12.774 "is_configured": true, 00:20:12.774 "data_offset": 0, 00:20:12.774 "data_size": 65536 00:20:12.774 } 00:20:12.774 ] 00:20:12.774 }' 00:20:12.774 13:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:12.774 13:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.342 13:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.342 13:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:13.342 13:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.342 13:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.342 13:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.342 13:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:20:13.342 13:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:13.342 13:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.342 13:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.342 [2024-11-20 13:40:12.606600] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:13.342 13:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.342 13:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:13.342 13:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:13.342 13:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:13.342 13:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:13.342 13:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:13.342 13:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:13.342 13:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:13.342 13:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:13.342 13:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:13.342 13:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:13.342 13:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.343 13:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.343 13:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:13.343 13:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.343 13:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.343 13:40:12 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:13.343 "name": "Existed_Raid", 00:20:13.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.343 "strip_size_kb": 64, 00:20:13.343 "state": "configuring", 00:20:13.343 "raid_level": "raid5f", 00:20:13.343 "superblock": false, 00:20:13.343 "num_base_bdevs": 3, 00:20:13.343 "num_base_bdevs_discovered": 1, 00:20:13.343 "num_base_bdevs_operational": 3, 00:20:13.343 "base_bdevs_list": [ 00:20:13.343 { 00:20:13.343 "name": null, 00:20:13.343 "uuid": "50bd69ee-fdcc-4cb9-a909-b48a97b8d6df", 00:20:13.343 "is_configured": false, 00:20:13.343 "data_offset": 0, 00:20:13.343 "data_size": 65536 00:20:13.343 }, 00:20:13.343 { 00:20:13.343 "name": null, 00:20:13.343 "uuid": "bc9bc501-4d6b-4d90-99df-c408831222a6", 00:20:13.343 "is_configured": false, 00:20:13.343 "data_offset": 0, 00:20:13.343 "data_size": 65536 00:20:13.343 }, 00:20:13.343 { 00:20:13.343 "name": "BaseBdev3", 00:20:13.343 "uuid": "8949836c-1121-4db0-806b-99ee779c86cc", 00:20:13.343 "is_configured": true, 00:20:13.343 "data_offset": 0, 00:20:13.343 "data_size": 65536 00:20:13.343 } 00:20:13.343 ] 00:20:13.343 }' 00:20:13.343 13:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:13.343 13:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.910 13:40:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.910 13:40:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:13.910 13:40:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.910 13:40:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.910 13:40:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.910 13:40:13 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:20:13.910 13:40:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:13.910 13:40:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.910 13:40:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.910 [2024-11-20 13:40:13.158398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:13.910 13:40:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.910 13:40:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:13.910 13:40:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:13.910 13:40:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:13.910 13:40:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:13.910 13:40:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:13.910 13:40:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:13.910 13:40:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:13.910 13:40:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:13.910 13:40:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:13.910 13:40:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:13.910 13:40:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:13.910 13:40:13 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.910 13:40:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.910 13:40:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.910 13:40:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.910 13:40:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:13.910 "name": "Existed_Raid", 00:20:13.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.910 "strip_size_kb": 64, 00:20:13.910 "state": "configuring", 00:20:13.910 "raid_level": "raid5f", 00:20:13.910 "superblock": false, 00:20:13.911 "num_base_bdevs": 3, 00:20:13.911 "num_base_bdevs_discovered": 2, 00:20:13.911 "num_base_bdevs_operational": 3, 00:20:13.911 "base_bdevs_list": [ 00:20:13.911 { 00:20:13.911 "name": null, 00:20:13.911 "uuid": "50bd69ee-fdcc-4cb9-a909-b48a97b8d6df", 00:20:13.911 "is_configured": false, 00:20:13.911 "data_offset": 0, 00:20:13.911 "data_size": 65536 00:20:13.911 }, 00:20:13.911 { 00:20:13.911 "name": "BaseBdev2", 00:20:13.911 "uuid": "bc9bc501-4d6b-4d90-99df-c408831222a6", 00:20:13.911 "is_configured": true, 00:20:13.911 "data_offset": 0, 00:20:13.911 "data_size": 65536 00:20:13.911 }, 00:20:13.911 { 00:20:13.911 "name": "BaseBdev3", 00:20:13.911 "uuid": "8949836c-1121-4db0-806b-99ee779c86cc", 00:20:13.911 "is_configured": true, 00:20:13.911 "data_offset": 0, 00:20:13.911 "data_size": 65536 00:20:13.911 } 00:20:13.911 ] 00:20:13.911 }' 00:20:13.911 13:40:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:13.911 13:40:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.170 13:40:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.170 13:40:13 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:14.170 13:40:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.170 13:40:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.170 13:40:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.170 13:40:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:20:14.170 13:40:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.170 13:40:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:14.170 13:40:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.170 13:40:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.430 13:40:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.430 13:40:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 50bd69ee-fdcc-4cb9-a909-b48a97b8d6df 00:20:14.430 13:40:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.430 13:40:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.430 [2024-11-20 13:40:13.733369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:14.430 [2024-11-20 13:40:13.733415] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:14.430 [2024-11-20 13:40:13.733427] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:20:14.430 [2024-11-20 13:40:13.733697] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:20:14.430 [2024-11-20 13:40:13.739365] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:14.430 [2024-11-20 13:40:13.739387] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:20:14.430 [2024-11-20 13:40:13.739639] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:14.430 NewBaseBdev 00:20:14.430 13:40:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.430 13:40:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:20:14.430 13:40:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:20:14.430 13:40:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:14.430 13:40:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:14.430 13:40:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:14.430 13:40:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:14.430 13:40:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:14.430 13:40:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.430 13:40:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.430 13:40:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.430 13:40:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:14.430 13:40:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.430 13:40:13 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.430 [ 00:20:14.430 { 00:20:14.430 "name": "NewBaseBdev", 00:20:14.430 "aliases": [ 00:20:14.430 "50bd69ee-fdcc-4cb9-a909-b48a97b8d6df" 00:20:14.430 ], 00:20:14.430 "product_name": "Malloc disk", 00:20:14.430 "block_size": 512, 00:20:14.430 "num_blocks": 65536, 00:20:14.430 "uuid": "50bd69ee-fdcc-4cb9-a909-b48a97b8d6df", 00:20:14.430 "assigned_rate_limits": { 00:20:14.430 "rw_ios_per_sec": 0, 00:20:14.430 "rw_mbytes_per_sec": 0, 00:20:14.430 "r_mbytes_per_sec": 0, 00:20:14.430 "w_mbytes_per_sec": 0 00:20:14.430 }, 00:20:14.430 "claimed": true, 00:20:14.430 "claim_type": "exclusive_write", 00:20:14.430 "zoned": false, 00:20:14.430 "supported_io_types": { 00:20:14.430 "read": true, 00:20:14.430 "write": true, 00:20:14.430 "unmap": true, 00:20:14.430 "flush": true, 00:20:14.430 "reset": true, 00:20:14.430 "nvme_admin": false, 00:20:14.430 "nvme_io": false, 00:20:14.430 "nvme_io_md": false, 00:20:14.430 "write_zeroes": true, 00:20:14.430 "zcopy": true, 00:20:14.430 "get_zone_info": false, 00:20:14.430 "zone_management": false, 00:20:14.430 "zone_append": false, 00:20:14.430 "compare": false, 00:20:14.430 "compare_and_write": false, 00:20:14.430 "abort": true, 00:20:14.430 "seek_hole": false, 00:20:14.430 "seek_data": false, 00:20:14.430 "copy": true, 00:20:14.430 "nvme_iov_md": false 00:20:14.430 }, 00:20:14.430 "memory_domains": [ 00:20:14.430 { 00:20:14.430 "dma_device_id": "system", 00:20:14.430 "dma_device_type": 1 00:20:14.430 }, 00:20:14.430 { 00:20:14.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:14.430 "dma_device_type": 2 00:20:14.430 } 00:20:14.430 ], 00:20:14.430 "driver_specific": {} 00:20:14.430 } 00:20:14.430 ] 00:20:14.430 13:40:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.430 13:40:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:14.430 13:40:13 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:20:14.430 13:40:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:14.430 13:40:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:14.430 13:40:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:14.430 13:40:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:14.430 13:40:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:14.430 13:40:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:14.430 13:40:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:14.430 13:40:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:14.430 13:40:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:14.430 13:40:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.430 13:40:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:14.430 13:40:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.430 13:40:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.430 13:40:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.431 13:40:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:14.431 "name": "Existed_Raid", 00:20:14.431 "uuid": "5fa884bc-982c-4b92-9746-6a070c7a6aab", 00:20:14.431 "strip_size_kb": 64, 00:20:14.431 "state": "online", 
00:20:14.431 "raid_level": "raid5f", 00:20:14.431 "superblock": false, 00:20:14.431 "num_base_bdevs": 3, 00:20:14.431 "num_base_bdevs_discovered": 3, 00:20:14.431 "num_base_bdevs_operational": 3, 00:20:14.431 "base_bdevs_list": [ 00:20:14.431 { 00:20:14.431 "name": "NewBaseBdev", 00:20:14.431 "uuid": "50bd69ee-fdcc-4cb9-a909-b48a97b8d6df", 00:20:14.431 "is_configured": true, 00:20:14.431 "data_offset": 0, 00:20:14.431 "data_size": 65536 00:20:14.431 }, 00:20:14.431 { 00:20:14.431 "name": "BaseBdev2", 00:20:14.431 "uuid": "bc9bc501-4d6b-4d90-99df-c408831222a6", 00:20:14.431 "is_configured": true, 00:20:14.431 "data_offset": 0, 00:20:14.431 "data_size": 65536 00:20:14.431 }, 00:20:14.431 { 00:20:14.431 "name": "BaseBdev3", 00:20:14.431 "uuid": "8949836c-1121-4db0-806b-99ee779c86cc", 00:20:14.431 "is_configured": true, 00:20:14.431 "data_offset": 0, 00:20:14.431 "data_size": 65536 00:20:14.431 } 00:20:14.431 ] 00:20:14.431 }' 00:20:14.431 13:40:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:14.431 13:40:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.000 13:40:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:20:15.000 13:40:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:15.000 13:40:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:15.000 13:40:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:15.000 13:40:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:15.000 13:40:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:15.000 13:40:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:15.000 13:40:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:15.000 13:40:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.000 13:40:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.000 [2024-11-20 13:40:14.205870] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:15.000 13:40:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.000 13:40:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:15.000 "name": "Existed_Raid", 00:20:15.000 "aliases": [ 00:20:15.000 "5fa884bc-982c-4b92-9746-6a070c7a6aab" 00:20:15.000 ], 00:20:15.000 "product_name": "Raid Volume", 00:20:15.000 "block_size": 512, 00:20:15.000 "num_blocks": 131072, 00:20:15.000 "uuid": "5fa884bc-982c-4b92-9746-6a070c7a6aab", 00:20:15.000 "assigned_rate_limits": { 00:20:15.000 "rw_ios_per_sec": 0, 00:20:15.000 "rw_mbytes_per_sec": 0, 00:20:15.000 "r_mbytes_per_sec": 0, 00:20:15.000 "w_mbytes_per_sec": 0 00:20:15.000 }, 00:20:15.000 "claimed": false, 00:20:15.000 "zoned": false, 00:20:15.000 "supported_io_types": { 00:20:15.000 "read": true, 00:20:15.000 "write": true, 00:20:15.000 "unmap": false, 00:20:15.000 "flush": false, 00:20:15.000 "reset": true, 00:20:15.000 "nvme_admin": false, 00:20:15.000 "nvme_io": false, 00:20:15.000 "nvme_io_md": false, 00:20:15.000 "write_zeroes": true, 00:20:15.000 "zcopy": false, 00:20:15.000 "get_zone_info": false, 00:20:15.000 "zone_management": false, 00:20:15.000 "zone_append": false, 00:20:15.000 "compare": false, 00:20:15.000 "compare_and_write": false, 00:20:15.000 "abort": false, 00:20:15.000 "seek_hole": false, 00:20:15.000 "seek_data": false, 00:20:15.000 "copy": false, 00:20:15.000 "nvme_iov_md": false 00:20:15.000 }, 00:20:15.000 "driver_specific": { 00:20:15.000 "raid": { 00:20:15.000 "uuid": 
"5fa884bc-982c-4b92-9746-6a070c7a6aab", 00:20:15.000 "strip_size_kb": 64, 00:20:15.000 "state": "online", 00:20:15.000 "raid_level": "raid5f", 00:20:15.000 "superblock": false, 00:20:15.000 "num_base_bdevs": 3, 00:20:15.000 "num_base_bdevs_discovered": 3, 00:20:15.000 "num_base_bdevs_operational": 3, 00:20:15.000 "base_bdevs_list": [ 00:20:15.000 { 00:20:15.000 "name": "NewBaseBdev", 00:20:15.000 "uuid": "50bd69ee-fdcc-4cb9-a909-b48a97b8d6df", 00:20:15.000 "is_configured": true, 00:20:15.000 "data_offset": 0, 00:20:15.000 "data_size": 65536 00:20:15.000 }, 00:20:15.000 { 00:20:15.000 "name": "BaseBdev2", 00:20:15.000 "uuid": "bc9bc501-4d6b-4d90-99df-c408831222a6", 00:20:15.000 "is_configured": true, 00:20:15.000 "data_offset": 0, 00:20:15.000 "data_size": 65536 00:20:15.000 }, 00:20:15.000 { 00:20:15.000 "name": "BaseBdev3", 00:20:15.000 "uuid": "8949836c-1121-4db0-806b-99ee779c86cc", 00:20:15.000 "is_configured": true, 00:20:15.000 "data_offset": 0, 00:20:15.000 "data_size": 65536 00:20:15.000 } 00:20:15.000 ] 00:20:15.000 } 00:20:15.000 } 00:20:15.000 }' 00:20:15.000 13:40:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:15.000 13:40:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:20:15.000 BaseBdev2 00:20:15.000 BaseBdev3' 00:20:15.000 13:40:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:15.000 13:40:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:15.000 13:40:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:15.000 13:40:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:20:15.000 13:40:14 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.000 13:40:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:15.000 13:40:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.000 13:40:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.000 13:40:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:15.000 13:40:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:15.001 13:40:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:15.001 13:40:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:15.001 13:40:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.001 13:40:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:15.001 13:40:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.001 13:40:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.001 13:40:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:15.001 13:40:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:15.001 13:40:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:15.001 13:40:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:15.001 13:40:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:20:15.001 13:40:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.001 13:40:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.001 13:40:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.260 13:40:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:15.260 13:40:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:15.260 13:40:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:15.260 13:40:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.260 13:40:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.260 [2024-11-20 13:40:14.493281] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:15.260 [2024-11-20 13:40:14.493310] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:15.260 [2024-11-20 13:40:14.493393] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:15.260 [2024-11-20 13:40:14.493666] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:15.260 [2024-11-20 13:40:14.493682] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:20:15.260 13:40:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.260 13:40:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 79688 00:20:15.260 13:40:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 79688 ']' 00:20:15.260 13:40:14 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 79688 00:20:15.260 13:40:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:20:15.260 13:40:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:15.260 13:40:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79688 00:20:15.260 13:40:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:15.260 killing process with pid 79688 00:20:15.260 13:40:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:15.260 13:40:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79688' 00:20:15.260 13:40:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 79688 00:20:15.260 [2024-11-20 13:40:14.541245] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:15.260 13:40:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 79688 00:20:15.519 [2024-11-20 13:40:14.844477] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:16.899 13:40:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:20:16.899 00:20:16.899 real 0m10.932s 00:20:16.899 user 0m17.248s 00:20:16.899 sys 0m2.357s 00:20:16.899 13:40:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:16.899 13:40:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.899 ************************************ 00:20:16.899 END TEST raid5f_state_function_test 00:20:16.899 ************************************ 00:20:16.899 13:40:16 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:20:16.899 13:40:16 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:16.899 13:40:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:16.899 13:40:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:16.899 ************************************ 00:20:16.899 START TEST raid5f_state_function_test_sb 00:20:16.899 ************************************ 00:20:16.899 13:40:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:20:16.899 13:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:20:16.899 13:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:20:16.899 13:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:16.899 13:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:16.899 13:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:16.899 13:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:16.899 13:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:16.899 13:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:16.899 13:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:16.899 13:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:16.899 13:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:16.899 13:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:16.899 13:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:20:16.899 13:40:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:16.899 13:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:16.899 13:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:16.899 13:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:16.899 13:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:16.899 13:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:16.899 13:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:16.899 13:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:16.899 13:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:20:16.899 13:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:20:16.899 13:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:20:16.899 13:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:16.899 13:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:16.899 13:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80315 00:20:16.899 13:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:16.899 13:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80315' 00:20:16.899 Process raid pid: 80315 00:20:16.899 13:40:16 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80315 00:20:16.899 13:40:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80315 ']' 00:20:16.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.900 13:40:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.900 13:40:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:16.900 13:40:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.900 13:40:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:16.900 13:40:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.900 [2024-11-20 13:40:16.194134] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:20:16.900 [2024-11-20 13:40:16.194326] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:16.900 [2024-11-20 13:40:16.382355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.183 [2024-11-20 13:40:16.507980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.443 [2024-11-20 13:40:16.735623] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:17.443 [2024-11-20 13:40:16.735841] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:17.702 13:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:17.702 13:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:20:17.702 13:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:17.702 13:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.702 13:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.702 [2024-11-20 13:40:17.085439] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:17.702 [2024-11-20 13:40:17.085492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:17.702 [2024-11-20 13:40:17.085505] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:17.702 [2024-11-20 13:40:17.085518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:17.702 [2024-11-20 13:40:17.085549] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:20:17.702 [2024-11-20 13:40:17.085563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:17.702 13:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.702 13:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:17.702 13:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:17.702 13:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:17.702 13:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:17.702 13:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:17.702 13:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:17.702 13:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:17.702 13:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:17.702 13:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:17.702 13:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:17.702 13:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.702 13:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:17.702 13:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.702 13:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.702 13:40:17 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.702 13:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.702 "name": "Existed_Raid", 00:20:17.702 "uuid": "f603c6ca-cab5-4b99-9962-689d7e8ca40d", 00:20:17.702 "strip_size_kb": 64, 00:20:17.702 "state": "configuring", 00:20:17.702 "raid_level": "raid5f", 00:20:17.702 "superblock": true, 00:20:17.702 "num_base_bdevs": 3, 00:20:17.702 "num_base_bdevs_discovered": 0, 00:20:17.702 "num_base_bdevs_operational": 3, 00:20:17.702 "base_bdevs_list": [ 00:20:17.702 { 00:20:17.702 "name": "BaseBdev1", 00:20:17.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.702 "is_configured": false, 00:20:17.702 "data_offset": 0, 00:20:17.702 "data_size": 0 00:20:17.702 }, 00:20:17.702 { 00:20:17.702 "name": "BaseBdev2", 00:20:17.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.702 "is_configured": false, 00:20:17.702 "data_offset": 0, 00:20:17.702 "data_size": 0 00:20:17.702 }, 00:20:17.702 { 00:20:17.702 "name": "BaseBdev3", 00:20:17.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.702 "is_configured": false, 00:20:17.702 "data_offset": 0, 00:20:17.702 "data_size": 0 00:20:17.702 } 00:20:17.702 ] 00:20:17.702 }' 00:20:17.702 13:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.702 13:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.270 13:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:18.270 13:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.270 13:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.270 [2024-11-20 13:40:17.544855] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:18.270 
[2024-11-20 13:40:17.544896] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:18.270 13:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.270 13:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:18.270 13:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.270 13:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.270 [2024-11-20 13:40:17.556857] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:18.270 [2024-11-20 13:40:17.556927] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:18.270 [2024-11-20 13:40:17.556939] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:18.270 [2024-11-20 13:40:17.556953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:18.270 [2024-11-20 13:40:17.556961] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:18.270 [2024-11-20 13:40:17.556974] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:18.270 13:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.271 13:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:18.271 13:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.271 13:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.271 [2024-11-20 13:40:17.604102] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:18.271 BaseBdev1 00:20:18.271 13:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.271 13:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:18.271 13:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:18.271 13:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:18.271 13:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:18.271 13:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:18.271 13:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:18.271 13:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:18.271 13:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.271 13:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.271 13:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.271 13:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:18.271 13:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.271 13:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.271 [ 00:20:18.271 { 00:20:18.271 "name": "BaseBdev1", 00:20:18.271 "aliases": [ 00:20:18.271 "256ee7e7-e105-45ee-a0d5-1151fc19cbe7" 00:20:18.271 ], 00:20:18.271 "product_name": "Malloc disk", 00:20:18.271 "block_size": 512, 00:20:18.271 
"num_blocks": 65536, 00:20:18.271 "uuid": "256ee7e7-e105-45ee-a0d5-1151fc19cbe7", 00:20:18.271 "assigned_rate_limits": { 00:20:18.271 "rw_ios_per_sec": 0, 00:20:18.271 "rw_mbytes_per_sec": 0, 00:20:18.271 "r_mbytes_per_sec": 0, 00:20:18.271 "w_mbytes_per_sec": 0 00:20:18.271 }, 00:20:18.271 "claimed": true, 00:20:18.271 "claim_type": "exclusive_write", 00:20:18.271 "zoned": false, 00:20:18.271 "supported_io_types": { 00:20:18.271 "read": true, 00:20:18.271 "write": true, 00:20:18.271 "unmap": true, 00:20:18.271 "flush": true, 00:20:18.271 "reset": true, 00:20:18.271 "nvme_admin": false, 00:20:18.271 "nvme_io": false, 00:20:18.271 "nvme_io_md": false, 00:20:18.271 "write_zeroes": true, 00:20:18.271 "zcopy": true, 00:20:18.271 "get_zone_info": false, 00:20:18.271 "zone_management": false, 00:20:18.271 "zone_append": false, 00:20:18.271 "compare": false, 00:20:18.271 "compare_and_write": false, 00:20:18.271 "abort": true, 00:20:18.271 "seek_hole": false, 00:20:18.271 "seek_data": false, 00:20:18.271 "copy": true, 00:20:18.271 "nvme_iov_md": false 00:20:18.271 }, 00:20:18.271 "memory_domains": [ 00:20:18.271 { 00:20:18.271 "dma_device_id": "system", 00:20:18.271 "dma_device_type": 1 00:20:18.271 }, 00:20:18.271 { 00:20:18.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:18.271 "dma_device_type": 2 00:20:18.271 } 00:20:18.271 ], 00:20:18.271 "driver_specific": {} 00:20:18.271 } 00:20:18.271 ] 00:20:18.271 13:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.271 13:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:18.271 13:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:18.271 13:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:18.271 13:40:17 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:18.271 13:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:18.271 13:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:18.271 13:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:18.271 13:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:18.271 13:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:18.271 13:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:18.271 13:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:18.271 13:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:18.271 13:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.271 13:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.271 13:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.271 13:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.271 13:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:18.271 "name": "Existed_Raid", 00:20:18.271 "uuid": "9b74d596-1432-48d2-a1f8-7dc7fcf77d57", 00:20:18.271 "strip_size_kb": 64, 00:20:18.271 "state": "configuring", 00:20:18.271 "raid_level": "raid5f", 00:20:18.271 "superblock": true, 00:20:18.271 "num_base_bdevs": 3, 00:20:18.271 "num_base_bdevs_discovered": 1, 00:20:18.271 "num_base_bdevs_operational": 3, 00:20:18.271 "base_bdevs_list": [ 00:20:18.271 { 00:20:18.271 
"name": "BaseBdev1", 00:20:18.271 "uuid": "256ee7e7-e105-45ee-a0d5-1151fc19cbe7", 00:20:18.271 "is_configured": true, 00:20:18.271 "data_offset": 2048, 00:20:18.271 "data_size": 63488 00:20:18.271 }, 00:20:18.271 { 00:20:18.271 "name": "BaseBdev2", 00:20:18.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:18.271 "is_configured": false, 00:20:18.271 "data_offset": 0, 00:20:18.271 "data_size": 0 00:20:18.271 }, 00:20:18.271 { 00:20:18.271 "name": "BaseBdev3", 00:20:18.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:18.271 "is_configured": false, 00:20:18.271 "data_offset": 0, 00:20:18.271 "data_size": 0 00:20:18.271 } 00:20:18.271 ] 00:20:18.271 }' 00:20:18.271 13:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:18.271 13:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.838 13:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:18.838 13:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.838 13:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.838 [2024-11-20 13:40:18.075503] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:18.838 [2024-11-20 13:40:18.075712] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:18.838 13:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.838 13:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:18.838 13:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.838 13:40:18 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:20:18.838 [2024-11-20 13:40:18.087555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:18.838 [2024-11-20 13:40:18.089756] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:18.838 [2024-11-20 13:40:18.089807] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:18.838 [2024-11-20 13:40:18.089819] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:18.838 [2024-11-20 13:40:18.089832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:18.838 13:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.838 13:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:18.838 13:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:18.838 13:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:18.838 13:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:18.838 13:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:18.838 13:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:18.838 13:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:18.838 13:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:18.838 13:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:18.838 13:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:20:18.838 13:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:18.839 13:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:18.839 13:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.839 13:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:18.839 13:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.839 13:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.839 13:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.839 13:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:18.839 "name": "Existed_Raid", 00:20:18.839 "uuid": "9697d773-c513-4175-86da-cde8e127bc72", 00:20:18.839 "strip_size_kb": 64, 00:20:18.839 "state": "configuring", 00:20:18.839 "raid_level": "raid5f", 00:20:18.839 "superblock": true, 00:20:18.839 "num_base_bdevs": 3, 00:20:18.839 "num_base_bdevs_discovered": 1, 00:20:18.839 "num_base_bdevs_operational": 3, 00:20:18.839 "base_bdevs_list": [ 00:20:18.839 { 00:20:18.839 "name": "BaseBdev1", 00:20:18.839 "uuid": "256ee7e7-e105-45ee-a0d5-1151fc19cbe7", 00:20:18.839 "is_configured": true, 00:20:18.839 "data_offset": 2048, 00:20:18.839 "data_size": 63488 00:20:18.839 }, 00:20:18.839 { 00:20:18.839 "name": "BaseBdev2", 00:20:18.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:18.839 "is_configured": false, 00:20:18.839 "data_offset": 0, 00:20:18.839 "data_size": 0 00:20:18.839 }, 00:20:18.839 { 00:20:18.839 "name": "BaseBdev3", 00:20:18.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:18.839 "is_configured": false, 00:20:18.839 "data_offset": 0, 00:20:18.839 "data_size": 
0 00:20:18.839 } 00:20:18.839 ] 00:20:18.839 }' 00:20:18.839 13:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:18.839 13:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.098 13:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:19.098 13:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.098 13:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.098 [2024-11-20 13:40:18.559818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:19.098 BaseBdev2 00:20:19.098 13:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.098 13:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:19.098 13:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:19.098 13:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:19.098 13:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:19.098 13:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:19.098 13:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:19.098 13:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:19.098 13:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.098 13:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.098 13:40:18 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.098 13:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:19.098 13:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.098 13:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.357 [ 00:20:19.357 { 00:20:19.357 "name": "BaseBdev2", 00:20:19.357 "aliases": [ 00:20:19.357 "3fe6403b-a63c-48fe-90c8-49a21f95bcb3" 00:20:19.357 ], 00:20:19.357 "product_name": "Malloc disk", 00:20:19.357 "block_size": 512, 00:20:19.357 "num_blocks": 65536, 00:20:19.357 "uuid": "3fe6403b-a63c-48fe-90c8-49a21f95bcb3", 00:20:19.357 "assigned_rate_limits": { 00:20:19.357 "rw_ios_per_sec": 0, 00:20:19.357 "rw_mbytes_per_sec": 0, 00:20:19.357 "r_mbytes_per_sec": 0, 00:20:19.357 "w_mbytes_per_sec": 0 00:20:19.357 }, 00:20:19.357 "claimed": true, 00:20:19.357 "claim_type": "exclusive_write", 00:20:19.357 "zoned": false, 00:20:19.357 "supported_io_types": { 00:20:19.357 "read": true, 00:20:19.357 "write": true, 00:20:19.357 "unmap": true, 00:20:19.357 "flush": true, 00:20:19.357 "reset": true, 00:20:19.357 "nvme_admin": false, 00:20:19.357 "nvme_io": false, 00:20:19.357 "nvme_io_md": false, 00:20:19.357 "write_zeroes": true, 00:20:19.357 "zcopy": true, 00:20:19.357 "get_zone_info": false, 00:20:19.357 "zone_management": false, 00:20:19.357 "zone_append": false, 00:20:19.357 "compare": false, 00:20:19.357 "compare_and_write": false, 00:20:19.357 "abort": true, 00:20:19.357 "seek_hole": false, 00:20:19.357 "seek_data": false, 00:20:19.357 "copy": true, 00:20:19.357 "nvme_iov_md": false 00:20:19.357 }, 00:20:19.357 "memory_domains": [ 00:20:19.357 { 00:20:19.357 "dma_device_id": "system", 00:20:19.357 "dma_device_type": 1 00:20:19.357 }, 00:20:19.357 { 00:20:19.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:19.357 "dma_device_type": 2 00:20:19.357 } 
00:20:19.357 ], 00:20:19.357 "driver_specific": {} 00:20:19.357 } 00:20:19.357 ] 00:20:19.357 13:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.357 13:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:19.357 13:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:19.357 13:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:19.357 13:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:19.357 13:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:19.357 13:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:19.357 13:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:19.357 13:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:19.357 13:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:19.357 13:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:19.357 13:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:19.357 13:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:19.357 13:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:19.357 13:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.357 13:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:20:19.357 13:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.357 13:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.357 13:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.357 13:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:19.357 "name": "Existed_Raid", 00:20:19.357 "uuid": "9697d773-c513-4175-86da-cde8e127bc72", 00:20:19.357 "strip_size_kb": 64, 00:20:19.357 "state": "configuring", 00:20:19.357 "raid_level": "raid5f", 00:20:19.357 "superblock": true, 00:20:19.357 "num_base_bdevs": 3, 00:20:19.357 "num_base_bdevs_discovered": 2, 00:20:19.357 "num_base_bdevs_operational": 3, 00:20:19.357 "base_bdevs_list": [ 00:20:19.357 { 00:20:19.357 "name": "BaseBdev1", 00:20:19.357 "uuid": "256ee7e7-e105-45ee-a0d5-1151fc19cbe7", 00:20:19.357 "is_configured": true, 00:20:19.358 "data_offset": 2048, 00:20:19.358 "data_size": 63488 00:20:19.358 }, 00:20:19.358 { 00:20:19.358 "name": "BaseBdev2", 00:20:19.358 "uuid": "3fe6403b-a63c-48fe-90c8-49a21f95bcb3", 00:20:19.358 "is_configured": true, 00:20:19.358 "data_offset": 2048, 00:20:19.358 "data_size": 63488 00:20:19.358 }, 00:20:19.358 { 00:20:19.358 "name": "BaseBdev3", 00:20:19.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:19.358 "is_configured": false, 00:20:19.358 "data_offset": 0, 00:20:19.358 "data_size": 0 00:20:19.358 } 00:20:19.358 ] 00:20:19.358 }' 00:20:19.358 13:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:19.358 13:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.616 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:19.616 13:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:20:19.616 13:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.616 [2024-11-20 13:40:19.099903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:19.616 [2024-11-20 13:40:19.100201] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:19.616 [2024-11-20 13:40:19.100226] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:19.616 [2024-11-20 13:40:19.100525] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:19.873 BaseBdev3 00:20:19.873 13:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.873 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:20:19.873 13:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:19.873 13:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:19.873 13:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:19.873 13:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:19.873 13:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:19.873 13:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:19.873 13:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.873 13:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.873 [2024-11-20 13:40:19.106993] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:19.873 [2024-11-20 13:40:19.107174] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:19.873 [2024-11-20 13:40:19.107634] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:19.873 13:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.873 13:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:19.873 13:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.873 13:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.873 [ 00:20:19.873 { 00:20:19.873 "name": "BaseBdev3", 00:20:19.873 "aliases": [ 00:20:19.873 "f0f6acfe-581a-4831-aa95-de1de84204ad" 00:20:19.873 ], 00:20:19.873 "product_name": "Malloc disk", 00:20:19.873 "block_size": 512, 00:20:19.873 "num_blocks": 65536, 00:20:19.873 "uuid": "f0f6acfe-581a-4831-aa95-de1de84204ad", 00:20:19.873 "assigned_rate_limits": { 00:20:19.873 "rw_ios_per_sec": 0, 00:20:19.873 "rw_mbytes_per_sec": 0, 00:20:19.873 "r_mbytes_per_sec": 0, 00:20:19.873 "w_mbytes_per_sec": 0 00:20:19.873 }, 00:20:19.873 "claimed": true, 00:20:19.873 "claim_type": "exclusive_write", 00:20:19.873 "zoned": false, 00:20:19.873 "supported_io_types": { 00:20:19.873 "read": true, 00:20:19.873 "write": true, 00:20:19.873 "unmap": true, 00:20:19.873 "flush": true, 00:20:19.873 "reset": true, 00:20:19.873 "nvme_admin": false, 00:20:19.873 "nvme_io": false, 00:20:19.873 "nvme_io_md": false, 00:20:19.873 "write_zeroes": true, 00:20:19.873 "zcopy": true, 00:20:19.873 "get_zone_info": false, 00:20:19.873 "zone_management": false, 00:20:19.873 "zone_append": false, 00:20:19.873 "compare": false, 00:20:19.873 "compare_and_write": false, 00:20:19.873 "abort": true, 00:20:19.873 "seek_hole": false, 00:20:19.873 "seek_data": false, 00:20:19.873 "copy": true, 00:20:19.873 
"nvme_iov_md": false 00:20:19.873 }, 00:20:19.873 "memory_domains": [ 00:20:19.873 { 00:20:19.873 "dma_device_id": "system", 00:20:19.873 "dma_device_type": 1 00:20:19.873 }, 00:20:19.873 { 00:20:19.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:19.873 "dma_device_type": 2 00:20:19.873 } 00:20:19.873 ], 00:20:19.873 "driver_specific": {} 00:20:19.873 } 00:20:19.873 ] 00:20:19.873 13:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.873 13:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:19.873 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:19.873 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:19.873 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:20:19.873 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:19.873 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:19.873 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:19.873 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:19.873 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:19.873 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:19.873 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:19.873 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:19.873 13:40:19 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:20:19.873 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.873 13:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.873 13:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.873 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:19.873 13:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.873 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:19.873 "name": "Existed_Raid", 00:20:19.873 "uuid": "9697d773-c513-4175-86da-cde8e127bc72", 00:20:19.873 "strip_size_kb": 64, 00:20:19.873 "state": "online", 00:20:19.873 "raid_level": "raid5f", 00:20:19.873 "superblock": true, 00:20:19.873 "num_base_bdevs": 3, 00:20:19.873 "num_base_bdevs_discovered": 3, 00:20:19.873 "num_base_bdevs_operational": 3, 00:20:19.873 "base_bdevs_list": [ 00:20:19.873 { 00:20:19.873 "name": "BaseBdev1", 00:20:19.873 "uuid": "256ee7e7-e105-45ee-a0d5-1151fc19cbe7", 00:20:19.873 "is_configured": true, 00:20:19.873 "data_offset": 2048, 00:20:19.873 "data_size": 63488 00:20:19.873 }, 00:20:19.873 { 00:20:19.873 "name": "BaseBdev2", 00:20:19.873 "uuid": "3fe6403b-a63c-48fe-90c8-49a21f95bcb3", 00:20:19.873 "is_configured": true, 00:20:19.873 "data_offset": 2048, 00:20:19.873 "data_size": 63488 00:20:19.873 }, 00:20:19.874 { 00:20:19.874 "name": "BaseBdev3", 00:20:19.874 "uuid": "f0f6acfe-581a-4831-aa95-de1de84204ad", 00:20:19.874 "is_configured": true, 00:20:19.874 "data_offset": 2048, 00:20:19.874 "data_size": 63488 00:20:19.874 } 00:20:19.874 ] 00:20:19.874 }' 00:20:19.874 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:19.874 13:40:19 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.132 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:20.132 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:20.132 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:20.132 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:20.132 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:20:20.132 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:20.132 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:20.132 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:20.132 13:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.132 13:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.132 [2024-11-20 13:40:19.614612] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:20.392 13:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.392 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:20.392 "name": "Existed_Raid", 00:20:20.392 "aliases": [ 00:20:20.392 "9697d773-c513-4175-86da-cde8e127bc72" 00:20:20.392 ], 00:20:20.392 "product_name": "Raid Volume", 00:20:20.392 "block_size": 512, 00:20:20.392 "num_blocks": 126976, 00:20:20.392 "uuid": "9697d773-c513-4175-86da-cde8e127bc72", 00:20:20.392 "assigned_rate_limits": { 00:20:20.392 "rw_ios_per_sec": 0, 00:20:20.392 
"rw_mbytes_per_sec": 0, 00:20:20.392 "r_mbytes_per_sec": 0, 00:20:20.392 "w_mbytes_per_sec": 0 00:20:20.392 }, 00:20:20.392 "claimed": false, 00:20:20.392 "zoned": false, 00:20:20.392 "supported_io_types": { 00:20:20.392 "read": true, 00:20:20.392 "write": true, 00:20:20.392 "unmap": false, 00:20:20.392 "flush": false, 00:20:20.392 "reset": true, 00:20:20.392 "nvme_admin": false, 00:20:20.392 "nvme_io": false, 00:20:20.392 "nvme_io_md": false, 00:20:20.392 "write_zeroes": true, 00:20:20.392 "zcopy": false, 00:20:20.392 "get_zone_info": false, 00:20:20.392 "zone_management": false, 00:20:20.392 "zone_append": false, 00:20:20.392 "compare": false, 00:20:20.392 "compare_and_write": false, 00:20:20.392 "abort": false, 00:20:20.392 "seek_hole": false, 00:20:20.392 "seek_data": false, 00:20:20.392 "copy": false, 00:20:20.392 "nvme_iov_md": false 00:20:20.392 }, 00:20:20.392 "driver_specific": { 00:20:20.392 "raid": { 00:20:20.392 "uuid": "9697d773-c513-4175-86da-cde8e127bc72", 00:20:20.392 "strip_size_kb": 64, 00:20:20.392 "state": "online", 00:20:20.392 "raid_level": "raid5f", 00:20:20.392 "superblock": true, 00:20:20.392 "num_base_bdevs": 3, 00:20:20.392 "num_base_bdevs_discovered": 3, 00:20:20.392 "num_base_bdevs_operational": 3, 00:20:20.392 "base_bdevs_list": [ 00:20:20.392 { 00:20:20.392 "name": "BaseBdev1", 00:20:20.392 "uuid": "256ee7e7-e105-45ee-a0d5-1151fc19cbe7", 00:20:20.392 "is_configured": true, 00:20:20.392 "data_offset": 2048, 00:20:20.392 "data_size": 63488 00:20:20.392 }, 00:20:20.392 { 00:20:20.392 "name": "BaseBdev2", 00:20:20.392 "uuid": "3fe6403b-a63c-48fe-90c8-49a21f95bcb3", 00:20:20.392 "is_configured": true, 00:20:20.392 "data_offset": 2048, 00:20:20.392 "data_size": 63488 00:20:20.392 }, 00:20:20.392 { 00:20:20.392 "name": "BaseBdev3", 00:20:20.392 "uuid": "f0f6acfe-581a-4831-aa95-de1de84204ad", 00:20:20.392 "is_configured": true, 00:20:20.392 "data_offset": 2048, 00:20:20.392 "data_size": 63488 00:20:20.392 } 00:20:20.392 ] 00:20:20.392 } 
00:20:20.392 } 00:20:20.392 }' 00:20:20.392 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:20.392 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:20.392 BaseBdev2 00:20:20.392 BaseBdev3' 00:20:20.392 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:20.392 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:20.392 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:20.392 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:20.392 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:20.392 13:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.392 13:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.392 13:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.392 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:20.392 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:20.392 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:20.392 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:20.392 13:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:20.392 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:20.392 13:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.392 13:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.392 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:20.392 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:20.392 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:20.392 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:20.392 13:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.392 13:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.392 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:20.392 13:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.393 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:20.393 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:20.393 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:20.393 13:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.393 13:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.393 [2024-11-20 13:40:19.870452] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:20.653 13:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.653 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:20.653 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:20:20.653 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:20.653 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:20:20.653 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:20.653 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:20:20.653 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:20.653 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:20.653 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:20.653 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:20.653 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:20.653 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:20.653 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:20.653 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:20.653 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:20.653 13:40:19 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:20.653 13:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.653 13:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.653 13:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.653 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.653 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:20.653 "name": "Existed_Raid", 00:20:20.653 "uuid": "9697d773-c513-4175-86da-cde8e127bc72", 00:20:20.653 "strip_size_kb": 64, 00:20:20.653 "state": "online", 00:20:20.654 "raid_level": "raid5f", 00:20:20.654 "superblock": true, 00:20:20.654 "num_base_bdevs": 3, 00:20:20.654 "num_base_bdevs_discovered": 2, 00:20:20.654 "num_base_bdevs_operational": 2, 00:20:20.654 "base_bdevs_list": [ 00:20:20.654 { 00:20:20.654 "name": null, 00:20:20.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:20.654 "is_configured": false, 00:20:20.654 "data_offset": 0, 00:20:20.654 "data_size": 63488 00:20:20.654 }, 00:20:20.654 { 00:20:20.654 "name": "BaseBdev2", 00:20:20.654 "uuid": "3fe6403b-a63c-48fe-90c8-49a21f95bcb3", 00:20:20.654 "is_configured": true, 00:20:20.654 "data_offset": 2048, 00:20:20.654 "data_size": 63488 00:20:20.654 }, 00:20:20.654 { 00:20:20.654 "name": "BaseBdev3", 00:20:20.654 "uuid": "f0f6acfe-581a-4831-aa95-de1de84204ad", 00:20:20.654 "is_configured": true, 00:20:20.654 "data_offset": 2048, 00:20:20.654 "data_size": 63488 00:20:20.654 } 00:20:20.654 ] 00:20:20.654 }' 00:20:20.654 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:20.654 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.913 13:40:20 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:20.913 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:21.172 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:21.172 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.172 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.172 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.172 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.172 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:21.172 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:21.172 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:21.172 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.172 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.172 [2024-11-20 13:40:20.442473] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:21.172 [2024-11-20 13:40:20.442623] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:21.172 [2024-11-20 13:40:20.547317] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:21.172 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.172 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:21.172 13:40:20 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:21.172 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.172 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.172 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.172 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:21.172 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.172 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:21.172 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:21.172 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:20:21.172 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.172 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.172 [2024-11-20 13:40:20.603309] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:21.172 [2024-11-20 13:40:20.603492] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:21.431 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.431 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:21.431 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:21.431 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.431 13:40:20 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:21.431 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.431 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.431 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.431 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:21.431 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:21.431 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:20:21.431 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:20:21.431 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:21.432 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:21.432 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.432 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.432 BaseBdev2 00:20:21.432 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.432 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:20:21.432 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:21.432 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:21.432 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:21.432 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # 
[[ -z '' ]] 00:20:21.432 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:21.432 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:21.432 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.432 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.432 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.432 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:21.432 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.432 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.432 [ 00:20:21.432 { 00:20:21.432 "name": "BaseBdev2", 00:20:21.432 "aliases": [ 00:20:21.432 "24033b02-6099-4908-b0cb-ed76031e6d05" 00:20:21.432 ], 00:20:21.432 "product_name": "Malloc disk", 00:20:21.432 "block_size": 512, 00:20:21.432 "num_blocks": 65536, 00:20:21.432 "uuid": "24033b02-6099-4908-b0cb-ed76031e6d05", 00:20:21.432 "assigned_rate_limits": { 00:20:21.432 "rw_ios_per_sec": 0, 00:20:21.432 "rw_mbytes_per_sec": 0, 00:20:21.432 "r_mbytes_per_sec": 0, 00:20:21.432 "w_mbytes_per_sec": 0 00:20:21.432 }, 00:20:21.432 "claimed": false, 00:20:21.432 "zoned": false, 00:20:21.432 "supported_io_types": { 00:20:21.432 "read": true, 00:20:21.432 "write": true, 00:20:21.432 "unmap": true, 00:20:21.432 "flush": true, 00:20:21.432 "reset": true, 00:20:21.432 "nvme_admin": false, 00:20:21.432 "nvme_io": false, 00:20:21.432 "nvme_io_md": false, 00:20:21.432 "write_zeroes": true, 00:20:21.432 "zcopy": true, 00:20:21.432 "get_zone_info": false, 00:20:21.432 "zone_management": false, 00:20:21.432 "zone_append": false, 
00:20:21.432 "compare": false, 00:20:21.432 "compare_and_write": false, 00:20:21.432 "abort": true, 00:20:21.432 "seek_hole": false, 00:20:21.432 "seek_data": false, 00:20:21.432 "copy": true, 00:20:21.432 "nvme_iov_md": false 00:20:21.432 }, 00:20:21.432 "memory_domains": [ 00:20:21.432 { 00:20:21.432 "dma_device_id": "system", 00:20:21.432 "dma_device_type": 1 00:20:21.432 }, 00:20:21.432 { 00:20:21.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:21.432 "dma_device_type": 2 00:20:21.432 } 00:20:21.432 ], 00:20:21.432 "driver_specific": {} 00:20:21.432 } 00:20:21.432 ] 00:20:21.432 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.432 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:21.432 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:21.432 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:21.432 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:21.432 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.432 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.432 BaseBdev3 00:20:21.432 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.432 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:20:21.432 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:21.432 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:21.432 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:21.432 
13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:21.432 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:21.432 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:21.432 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.432 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.432 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.432 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:21.432 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.432 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.432 [ 00:20:21.432 { 00:20:21.432 "name": "BaseBdev3", 00:20:21.432 "aliases": [ 00:20:21.432 "89d2b397-b7f6-43bf-8793-f186ddfff443" 00:20:21.432 ], 00:20:21.432 "product_name": "Malloc disk", 00:20:21.432 "block_size": 512, 00:20:21.432 "num_blocks": 65536, 00:20:21.432 "uuid": "89d2b397-b7f6-43bf-8793-f186ddfff443", 00:20:21.432 "assigned_rate_limits": { 00:20:21.432 "rw_ios_per_sec": 0, 00:20:21.432 "rw_mbytes_per_sec": 0, 00:20:21.691 "r_mbytes_per_sec": 0, 00:20:21.691 "w_mbytes_per_sec": 0 00:20:21.691 }, 00:20:21.691 "claimed": false, 00:20:21.691 "zoned": false, 00:20:21.691 "supported_io_types": { 00:20:21.691 "read": true, 00:20:21.691 "write": true, 00:20:21.691 "unmap": true, 00:20:21.691 "flush": true, 00:20:21.691 "reset": true, 00:20:21.691 "nvme_admin": false, 00:20:21.691 "nvme_io": false, 00:20:21.691 "nvme_io_md": false, 00:20:21.691 "write_zeroes": true, 00:20:21.691 "zcopy": true, 00:20:21.691 "get_zone_info": 
false, 00:20:21.691 "zone_management": false, 00:20:21.691 "zone_append": false, 00:20:21.691 "compare": false, 00:20:21.691 "compare_and_write": false, 00:20:21.691 "abort": true, 00:20:21.691 "seek_hole": false, 00:20:21.691 "seek_data": false, 00:20:21.691 "copy": true, 00:20:21.691 "nvme_iov_md": false 00:20:21.691 }, 00:20:21.691 "memory_domains": [ 00:20:21.691 { 00:20:21.691 "dma_device_id": "system", 00:20:21.691 "dma_device_type": 1 00:20:21.691 }, 00:20:21.691 { 00:20:21.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:21.691 "dma_device_type": 2 00:20:21.691 } 00:20:21.691 ], 00:20:21.691 "driver_specific": {} 00:20:21.691 } 00:20:21.691 ] 00:20:21.691 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.691 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:21.691 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:21.691 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:21.691 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:21.691 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.691 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.691 [2024-11-20 13:40:20.937391] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:21.691 [2024-11-20 13:40:20.937564] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:21.691 [2024-11-20 13:40:20.937713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:21.691 [2024-11-20 13:40:20.940324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:20:21.691 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.691 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:21.691 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:21.691 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:21.691 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:21.691 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:21.691 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:21.691 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:21.691 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:21.691 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:21.691 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:21.691 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.691 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:21.691 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.691 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.691 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.691 13:40:20 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:21.691 "name": "Existed_Raid", 00:20:21.691 "uuid": "bdb47c25-73b8-437e-a8aa-b5c9ec4da957", 00:20:21.691 "strip_size_kb": 64, 00:20:21.691 "state": "configuring", 00:20:21.691 "raid_level": "raid5f", 00:20:21.691 "superblock": true, 00:20:21.691 "num_base_bdevs": 3, 00:20:21.691 "num_base_bdevs_discovered": 2, 00:20:21.691 "num_base_bdevs_operational": 3, 00:20:21.691 "base_bdevs_list": [ 00:20:21.691 { 00:20:21.691 "name": "BaseBdev1", 00:20:21.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.691 "is_configured": false, 00:20:21.691 "data_offset": 0, 00:20:21.691 "data_size": 0 00:20:21.691 }, 00:20:21.691 { 00:20:21.691 "name": "BaseBdev2", 00:20:21.691 "uuid": "24033b02-6099-4908-b0cb-ed76031e6d05", 00:20:21.691 "is_configured": true, 00:20:21.691 "data_offset": 2048, 00:20:21.691 "data_size": 63488 00:20:21.691 }, 00:20:21.691 { 00:20:21.691 "name": "BaseBdev3", 00:20:21.691 "uuid": "89d2b397-b7f6-43bf-8793-f186ddfff443", 00:20:21.691 "is_configured": true, 00:20:21.691 "data_offset": 2048, 00:20:21.691 "data_size": 63488 00:20:21.691 } 00:20:21.691 ] 00:20:21.691 }' 00:20:21.691 13:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:21.691 13:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.950 13:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:21.950 13:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.950 13:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.950 [2024-11-20 13:40:21.340833] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:21.950 13:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.950 
13:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:21.950 13:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:21.950 13:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:21.950 13:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:21.950 13:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:21.950 13:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:21.950 13:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:21.950 13:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:21.950 13:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:21.950 13:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:21.950 13:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.950 13:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:21.950 13:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.950 13:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.950 13:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.950 13:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:21.950 "name": "Existed_Raid", 00:20:21.950 "uuid": 
"bdb47c25-73b8-437e-a8aa-b5c9ec4da957", 00:20:21.950 "strip_size_kb": 64, 00:20:21.950 "state": "configuring", 00:20:21.950 "raid_level": "raid5f", 00:20:21.950 "superblock": true, 00:20:21.950 "num_base_bdevs": 3, 00:20:21.950 "num_base_bdevs_discovered": 1, 00:20:21.950 "num_base_bdevs_operational": 3, 00:20:21.950 "base_bdevs_list": [ 00:20:21.950 { 00:20:21.950 "name": "BaseBdev1", 00:20:21.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.950 "is_configured": false, 00:20:21.950 "data_offset": 0, 00:20:21.950 "data_size": 0 00:20:21.950 }, 00:20:21.950 { 00:20:21.950 "name": null, 00:20:21.950 "uuid": "24033b02-6099-4908-b0cb-ed76031e6d05", 00:20:21.950 "is_configured": false, 00:20:21.950 "data_offset": 0, 00:20:21.950 "data_size": 63488 00:20:21.950 }, 00:20:21.950 { 00:20:21.950 "name": "BaseBdev3", 00:20:21.950 "uuid": "89d2b397-b7f6-43bf-8793-f186ddfff443", 00:20:21.950 "is_configured": true, 00:20:21.950 "data_offset": 2048, 00:20:21.950 "data_size": 63488 00:20:21.950 } 00:20:21.950 ] 00:20:21.950 }' 00:20:21.950 13:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:21.950 13:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.582 13:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.582 13:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.582 13:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.582 13:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:22.582 13:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.582 13:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:20:22.582 13:40:21 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:22.582 13:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.582 13:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.582 [2024-11-20 13:40:21.855932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:22.582 BaseBdev1 00:20:22.582 13:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.582 13:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:20:22.582 13:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:22.582 13:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:22.582 13:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:22.582 13:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:22.582 13:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:22.582 13:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:22.582 13:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.582 13:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.582 13:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.582 13:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:22.582 13:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:20:22.582 13:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.582 [ 00:20:22.582 { 00:20:22.582 "name": "BaseBdev1", 00:20:22.582 "aliases": [ 00:20:22.582 "7a6229c9-87e9-4bcd-a21b-2f77439129d2" 00:20:22.582 ], 00:20:22.582 "product_name": "Malloc disk", 00:20:22.582 "block_size": 512, 00:20:22.582 "num_blocks": 65536, 00:20:22.582 "uuid": "7a6229c9-87e9-4bcd-a21b-2f77439129d2", 00:20:22.582 "assigned_rate_limits": { 00:20:22.582 "rw_ios_per_sec": 0, 00:20:22.582 "rw_mbytes_per_sec": 0, 00:20:22.582 "r_mbytes_per_sec": 0, 00:20:22.582 "w_mbytes_per_sec": 0 00:20:22.582 }, 00:20:22.582 "claimed": true, 00:20:22.582 "claim_type": "exclusive_write", 00:20:22.582 "zoned": false, 00:20:22.582 "supported_io_types": { 00:20:22.582 "read": true, 00:20:22.582 "write": true, 00:20:22.582 "unmap": true, 00:20:22.582 "flush": true, 00:20:22.582 "reset": true, 00:20:22.582 "nvme_admin": false, 00:20:22.582 "nvme_io": false, 00:20:22.582 "nvme_io_md": false, 00:20:22.582 "write_zeroes": true, 00:20:22.582 "zcopy": true, 00:20:22.582 "get_zone_info": false, 00:20:22.582 "zone_management": false, 00:20:22.582 "zone_append": false, 00:20:22.582 "compare": false, 00:20:22.582 "compare_and_write": false, 00:20:22.582 "abort": true, 00:20:22.582 "seek_hole": false, 00:20:22.582 "seek_data": false, 00:20:22.582 "copy": true, 00:20:22.582 "nvme_iov_md": false 00:20:22.582 }, 00:20:22.582 "memory_domains": [ 00:20:22.582 { 00:20:22.582 "dma_device_id": "system", 00:20:22.582 "dma_device_type": 1 00:20:22.582 }, 00:20:22.582 { 00:20:22.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:22.582 "dma_device_type": 2 00:20:22.582 } 00:20:22.582 ], 00:20:22.582 "driver_specific": {} 00:20:22.582 } 00:20:22.582 ] 00:20:22.582 13:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.582 13:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # 
return 0 00:20:22.582 13:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:22.582 13:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:22.582 13:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:22.582 13:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:22.582 13:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:22.582 13:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:22.582 13:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:22.582 13:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:22.582 13:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:22.583 13:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:22.583 13:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.583 13:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:22.583 13:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.583 13:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.583 13:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.583 13:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:22.583 "name": "Existed_Raid", 00:20:22.583 "uuid": 
"bdb47c25-73b8-437e-a8aa-b5c9ec4da957", 00:20:22.583 "strip_size_kb": 64, 00:20:22.583 "state": "configuring", 00:20:22.583 "raid_level": "raid5f", 00:20:22.583 "superblock": true, 00:20:22.583 "num_base_bdevs": 3, 00:20:22.583 "num_base_bdevs_discovered": 2, 00:20:22.583 "num_base_bdevs_operational": 3, 00:20:22.583 "base_bdevs_list": [ 00:20:22.583 { 00:20:22.583 "name": "BaseBdev1", 00:20:22.583 "uuid": "7a6229c9-87e9-4bcd-a21b-2f77439129d2", 00:20:22.583 "is_configured": true, 00:20:22.583 "data_offset": 2048, 00:20:22.583 "data_size": 63488 00:20:22.583 }, 00:20:22.583 { 00:20:22.583 "name": null, 00:20:22.583 "uuid": "24033b02-6099-4908-b0cb-ed76031e6d05", 00:20:22.583 "is_configured": false, 00:20:22.583 "data_offset": 0, 00:20:22.583 "data_size": 63488 00:20:22.583 }, 00:20:22.583 { 00:20:22.583 "name": "BaseBdev3", 00:20:22.583 "uuid": "89d2b397-b7f6-43bf-8793-f186ddfff443", 00:20:22.583 "is_configured": true, 00:20:22.583 "data_offset": 2048, 00:20:22.583 "data_size": 63488 00:20:22.583 } 00:20:22.583 ] 00:20:22.583 }' 00:20:22.583 13:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:22.583 13:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.150 13:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:23.150 13:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.150 13:40:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.150 13:40:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.150 13:40:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.150 13:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:20:23.150 13:40:22 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:20:23.150 13:40:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.150 13:40:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.150 [2024-11-20 13:40:22.383285] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:23.150 13:40:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.150 13:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:23.150 13:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:23.150 13:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:23.150 13:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:23.150 13:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:23.150 13:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:23.150 13:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:23.150 13:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:23.150 13:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:23.150 13:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:23.150 13:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.150 13:40:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:23.150 13:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:23.150 13:40:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.150 13:40:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.150 13:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:23.150 "name": "Existed_Raid", 00:20:23.150 "uuid": "bdb47c25-73b8-437e-a8aa-b5c9ec4da957", 00:20:23.150 "strip_size_kb": 64, 00:20:23.150 "state": "configuring", 00:20:23.150 "raid_level": "raid5f", 00:20:23.150 "superblock": true, 00:20:23.150 "num_base_bdevs": 3, 00:20:23.150 "num_base_bdevs_discovered": 1, 00:20:23.150 "num_base_bdevs_operational": 3, 00:20:23.150 "base_bdevs_list": [ 00:20:23.150 { 00:20:23.150 "name": "BaseBdev1", 00:20:23.150 "uuid": "7a6229c9-87e9-4bcd-a21b-2f77439129d2", 00:20:23.150 "is_configured": true, 00:20:23.150 "data_offset": 2048, 00:20:23.150 "data_size": 63488 00:20:23.150 }, 00:20:23.150 { 00:20:23.150 "name": null, 00:20:23.150 "uuid": "24033b02-6099-4908-b0cb-ed76031e6d05", 00:20:23.150 "is_configured": false, 00:20:23.150 "data_offset": 0, 00:20:23.150 "data_size": 63488 00:20:23.150 }, 00:20:23.150 { 00:20:23.150 "name": null, 00:20:23.150 "uuid": "89d2b397-b7f6-43bf-8793-f186ddfff443", 00:20:23.150 "is_configured": false, 00:20:23.150 "data_offset": 0, 00:20:23.150 "data_size": 63488 00:20:23.150 } 00:20:23.150 ] 00:20:23.150 }' 00:20:23.150 13:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:23.150 13:40:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.409 13:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.409 13:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 
-- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:23.409 13:40:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.409 13:40:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.409 13:40:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.409 13:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:20:23.409 13:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:23.409 13:40:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.409 13:40:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.409 [2024-11-20 13:40:22.862993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:23.409 13:40:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.409 13:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:23.409 13:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:23.409 13:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:23.409 13:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:23.409 13:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:23.409 13:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:23.409 13:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:23.409 13:40:22 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:23.409 13:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:23.409 13:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:23.409 13:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.409 13:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:23.409 13:40:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.409 13:40:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.669 13:40:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.670 13:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:23.670 "name": "Existed_Raid", 00:20:23.670 "uuid": "bdb47c25-73b8-437e-a8aa-b5c9ec4da957", 00:20:23.670 "strip_size_kb": 64, 00:20:23.670 "state": "configuring", 00:20:23.670 "raid_level": "raid5f", 00:20:23.670 "superblock": true, 00:20:23.670 "num_base_bdevs": 3, 00:20:23.670 "num_base_bdevs_discovered": 2, 00:20:23.670 "num_base_bdevs_operational": 3, 00:20:23.670 "base_bdevs_list": [ 00:20:23.670 { 00:20:23.670 "name": "BaseBdev1", 00:20:23.670 "uuid": "7a6229c9-87e9-4bcd-a21b-2f77439129d2", 00:20:23.670 "is_configured": true, 00:20:23.670 "data_offset": 2048, 00:20:23.670 "data_size": 63488 00:20:23.670 }, 00:20:23.670 { 00:20:23.670 "name": null, 00:20:23.670 "uuid": "24033b02-6099-4908-b0cb-ed76031e6d05", 00:20:23.670 "is_configured": false, 00:20:23.670 "data_offset": 0, 00:20:23.670 "data_size": 63488 00:20:23.670 }, 00:20:23.670 { 00:20:23.670 "name": "BaseBdev3", 00:20:23.670 "uuid": "89d2b397-b7f6-43bf-8793-f186ddfff443", 00:20:23.670 
"is_configured": true, 00:20:23.670 "data_offset": 2048, 00:20:23.670 "data_size": 63488 00:20:23.670 } 00:20:23.670 ] 00:20:23.670 }' 00:20:23.670 13:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:23.670 13:40:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.928 13:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.928 13:40:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.928 13:40:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.928 13:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:23.928 13:40:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.928 13:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:20:23.928 13:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:23.928 13:40:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.928 13:40:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.928 [2024-11-20 13:40:23.330478] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:24.187 13:40:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.187 13:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:24.187 13:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:24.187 13:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:20:24.187 13:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:24.187 13:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:24.187 13:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:24.187 13:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:24.187 13:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:24.187 13:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:24.187 13:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:24.187 13:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.187 13:40:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.187 13:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:24.187 13:40:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.187 13:40:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.187 13:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:24.187 "name": "Existed_Raid", 00:20:24.187 "uuid": "bdb47c25-73b8-437e-a8aa-b5c9ec4da957", 00:20:24.187 "strip_size_kb": 64, 00:20:24.187 "state": "configuring", 00:20:24.188 "raid_level": "raid5f", 00:20:24.188 "superblock": true, 00:20:24.188 "num_base_bdevs": 3, 00:20:24.188 "num_base_bdevs_discovered": 1, 00:20:24.188 "num_base_bdevs_operational": 3, 00:20:24.188 "base_bdevs_list": [ 00:20:24.188 { 00:20:24.188 "name": null, 00:20:24.188 
"uuid": "7a6229c9-87e9-4bcd-a21b-2f77439129d2", 00:20:24.188 "is_configured": false, 00:20:24.188 "data_offset": 0, 00:20:24.188 "data_size": 63488 00:20:24.188 }, 00:20:24.188 { 00:20:24.188 "name": null, 00:20:24.188 "uuid": "24033b02-6099-4908-b0cb-ed76031e6d05", 00:20:24.188 "is_configured": false, 00:20:24.188 "data_offset": 0, 00:20:24.188 "data_size": 63488 00:20:24.188 }, 00:20:24.188 { 00:20:24.188 "name": "BaseBdev3", 00:20:24.188 "uuid": "89d2b397-b7f6-43bf-8793-f186ddfff443", 00:20:24.188 "is_configured": true, 00:20:24.188 "data_offset": 2048, 00:20:24.188 "data_size": 63488 00:20:24.188 } 00:20:24.188 ] 00:20:24.188 }' 00:20:24.188 13:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:24.188 13:40:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.448 13:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:24.448 13:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.448 13:40:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.448 13:40:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.448 13:40:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.448 13:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:20:24.448 13:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:24.448 13:40:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.448 13:40:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.448 [2024-11-20 13:40:23.910299] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:24.448 13:40:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.448 13:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:24.448 13:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:24.448 13:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:24.448 13:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:24.448 13:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:24.448 13:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:24.448 13:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:24.448 13:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:24.448 13:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:24.448 13:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:24.448 13:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.448 13:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:24.448 13:40:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.448 13:40:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.708 13:40:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:20:24.708 13:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:24.708 "name": "Existed_Raid", 00:20:24.708 "uuid": "bdb47c25-73b8-437e-a8aa-b5c9ec4da957", 00:20:24.708 "strip_size_kb": 64, 00:20:24.708 "state": "configuring", 00:20:24.708 "raid_level": "raid5f", 00:20:24.708 "superblock": true, 00:20:24.708 "num_base_bdevs": 3, 00:20:24.708 "num_base_bdevs_discovered": 2, 00:20:24.708 "num_base_bdevs_operational": 3, 00:20:24.708 "base_bdevs_list": [ 00:20:24.708 { 00:20:24.708 "name": null, 00:20:24.708 "uuid": "7a6229c9-87e9-4bcd-a21b-2f77439129d2", 00:20:24.708 "is_configured": false, 00:20:24.708 "data_offset": 0, 00:20:24.708 "data_size": 63488 00:20:24.708 }, 00:20:24.708 { 00:20:24.708 "name": "BaseBdev2", 00:20:24.708 "uuid": "24033b02-6099-4908-b0cb-ed76031e6d05", 00:20:24.708 "is_configured": true, 00:20:24.708 "data_offset": 2048, 00:20:24.708 "data_size": 63488 00:20:24.708 }, 00:20:24.708 { 00:20:24.708 "name": "BaseBdev3", 00:20:24.708 "uuid": "89d2b397-b7f6-43bf-8793-f186ddfff443", 00:20:24.708 "is_configured": true, 00:20:24.708 "data_offset": 2048, 00:20:24.708 "data_size": 63488 00:20:24.708 } 00:20:24.708 ] 00:20:24.708 }' 00:20:24.708 13:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:24.708 13:40:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.967 13:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:24.967 13:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.967 13:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.967 13:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.967 13:40:24 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.967 13:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:20:24.967 13:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.967 13:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.967 13:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.967 13:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:24.967 13:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.226 13:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7a6229c9-87e9-4bcd-a21b-2f77439129d2 00:20:25.226 13:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.226 13:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.226 [2024-11-20 13:40:24.497675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:25.226 [2024-11-20 13:40:24.497945] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:25.226 [2024-11-20 13:40:24.497965] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:25.226 [2024-11-20 13:40:24.498271] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:25.226 NewBaseBdev 00:20:25.226 13:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.226 13:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:20:25.226 13:40:24 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:20:25.226 13:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:25.226 13:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:25.226 13:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:25.226 13:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:25.226 13:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:25.226 13:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.226 13:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.226 [2024-11-20 13:40:24.504127] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:25.226 [2024-11-20 13:40:24.504282] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:20:25.226 [2024-11-20 13:40:24.504584] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:25.226 13:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.226 13:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:25.226 13:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.226 13:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.226 [ 00:20:25.226 { 00:20:25.226 "name": "NewBaseBdev", 00:20:25.226 "aliases": [ 00:20:25.226 "7a6229c9-87e9-4bcd-a21b-2f77439129d2" 00:20:25.226 ], 00:20:25.226 "product_name": "Malloc disk", 00:20:25.226 "block_size": 512, 
00:20:25.226 "num_blocks": 65536, 00:20:25.226 "uuid": "7a6229c9-87e9-4bcd-a21b-2f77439129d2", 00:20:25.226 "assigned_rate_limits": { 00:20:25.226 "rw_ios_per_sec": 0, 00:20:25.226 "rw_mbytes_per_sec": 0, 00:20:25.226 "r_mbytes_per_sec": 0, 00:20:25.226 "w_mbytes_per_sec": 0 00:20:25.226 }, 00:20:25.226 "claimed": true, 00:20:25.226 "claim_type": "exclusive_write", 00:20:25.226 "zoned": false, 00:20:25.226 "supported_io_types": { 00:20:25.227 "read": true, 00:20:25.227 "write": true, 00:20:25.227 "unmap": true, 00:20:25.227 "flush": true, 00:20:25.227 "reset": true, 00:20:25.227 "nvme_admin": false, 00:20:25.227 "nvme_io": false, 00:20:25.227 "nvme_io_md": false, 00:20:25.227 "write_zeroes": true, 00:20:25.227 "zcopy": true, 00:20:25.227 "get_zone_info": false, 00:20:25.227 "zone_management": false, 00:20:25.227 "zone_append": false, 00:20:25.227 "compare": false, 00:20:25.227 "compare_and_write": false, 00:20:25.227 "abort": true, 00:20:25.227 "seek_hole": false, 00:20:25.227 "seek_data": false, 00:20:25.227 "copy": true, 00:20:25.227 "nvme_iov_md": false 00:20:25.227 }, 00:20:25.227 "memory_domains": [ 00:20:25.227 { 00:20:25.227 "dma_device_id": "system", 00:20:25.227 "dma_device_type": 1 00:20:25.227 }, 00:20:25.227 { 00:20:25.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:25.227 "dma_device_type": 2 00:20:25.227 } 00:20:25.227 ], 00:20:25.227 "driver_specific": {} 00:20:25.227 } 00:20:25.227 ] 00:20:25.227 13:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.227 13:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:25.227 13:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:20:25.227 13:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:25.227 13:40:24 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:25.227 13:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:25.227 13:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:25.227 13:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:25.227 13:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:25.227 13:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:25.227 13:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:25.227 13:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:25.227 13:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.227 13:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:25.227 13:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.227 13:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.227 13:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.227 13:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:25.227 "name": "Existed_Raid", 00:20:25.227 "uuid": "bdb47c25-73b8-437e-a8aa-b5c9ec4da957", 00:20:25.227 "strip_size_kb": 64, 00:20:25.227 "state": "online", 00:20:25.227 "raid_level": "raid5f", 00:20:25.227 "superblock": true, 00:20:25.227 "num_base_bdevs": 3, 00:20:25.227 "num_base_bdevs_discovered": 3, 00:20:25.227 "num_base_bdevs_operational": 3, 00:20:25.227 "base_bdevs_list": [ 00:20:25.227 { 00:20:25.227 "name": 
"NewBaseBdev", 00:20:25.227 "uuid": "7a6229c9-87e9-4bcd-a21b-2f77439129d2", 00:20:25.227 "is_configured": true, 00:20:25.227 "data_offset": 2048, 00:20:25.227 "data_size": 63488 00:20:25.227 }, 00:20:25.227 { 00:20:25.227 "name": "BaseBdev2", 00:20:25.227 "uuid": "24033b02-6099-4908-b0cb-ed76031e6d05", 00:20:25.227 "is_configured": true, 00:20:25.227 "data_offset": 2048, 00:20:25.227 "data_size": 63488 00:20:25.227 }, 00:20:25.227 { 00:20:25.227 "name": "BaseBdev3", 00:20:25.227 "uuid": "89d2b397-b7f6-43bf-8793-f186ddfff443", 00:20:25.227 "is_configured": true, 00:20:25.227 "data_offset": 2048, 00:20:25.227 "data_size": 63488 00:20:25.227 } 00:20:25.227 ] 00:20:25.227 }' 00:20:25.227 13:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:25.227 13:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.795 13:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:20:25.795 13:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:25.795 13:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:25.795 13:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:25.795 13:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:20:25.795 13:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:25.795 13:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:25.795 13:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:25.795 13:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.795 13:40:24 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.795 [2024-11-20 13:40:25.003054] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:25.795 13:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.795 13:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:25.795 "name": "Existed_Raid", 00:20:25.795 "aliases": [ 00:20:25.795 "bdb47c25-73b8-437e-a8aa-b5c9ec4da957" 00:20:25.795 ], 00:20:25.795 "product_name": "Raid Volume", 00:20:25.795 "block_size": 512, 00:20:25.795 "num_blocks": 126976, 00:20:25.795 "uuid": "bdb47c25-73b8-437e-a8aa-b5c9ec4da957", 00:20:25.795 "assigned_rate_limits": { 00:20:25.795 "rw_ios_per_sec": 0, 00:20:25.795 "rw_mbytes_per_sec": 0, 00:20:25.795 "r_mbytes_per_sec": 0, 00:20:25.795 "w_mbytes_per_sec": 0 00:20:25.795 }, 00:20:25.795 "claimed": false, 00:20:25.795 "zoned": false, 00:20:25.795 "supported_io_types": { 00:20:25.795 "read": true, 00:20:25.795 "write": true, 00:20:25.795 "unmap": false, 00:20:25.795 "flush": false, 00:20:25.795 "reset": true, 00:20:25.795 "nvme_admin": false, 00:20:25.795 "nvme_io": false, 00:20:25.795 "nvme_io_md": false, 00:20:25.795 "write_zeroes": true, 00:20:25.795 "zcopy": false, 00:20:25.795 "get_zone_info": false, 00:20:25.795 "zone_management": false, 00:20:25.795 "zone_append": false, 00:20:25.795 "compare": false, 00:20:25.795 "compare_and_write": false, 00:20:25.795 "abort": false, 00:20:25.795 "seek_hole": false, 00:20:25.795 "seek_data": false, 00:20:25.795 "copy": false, 00:20:25.795 "nvme_iov_md": false 00:20:25.795 }, 00:20:25.795 "driver_specific": { 00:20:25.795 "raid": { 00:20:25.795 "uuid": "bdb47c25-73b8-437e-a8aa-b5c9ec4da957", 00:20:25.795 "strip_size_kb": 64, 00:20:25.795 "state": "online", 00:20:25.795 "raid_level": "raid5f", 00:20:25.795 "superblock": true, 00:20:25.795 "num_base_bdevs": 3, 00:20:25.796 
"num_base_bdevs_discovered": 3, 00:20:25.796 "num_base_bdevs_operational": 3, 00:20:25.796 "base_bdevs_list": [ 00:20:25.796 { 00:20:25.796 "name": "NewBaseBdev", 00:20:25.796 "uuid": "7a6229c9-87e9-4bcd-a21b-2f77439129d2", 00:20:25.796 "is_configured": true, 00:20:25.796 "data_offset": 2048, 00:20:25.796 "data_size": 63488 00:20:25.796 }, 00:20:25.796 { 00:20:25.796 "name": "BaseBdev2", 00:20:25.796 "uuid": "24033b02-6099-4908-b0cb-ed76031e6d05", 00:20:25.796 "is_configured": true, 00:20:25.796 "data_offset": 2048, 00:20:25.796 "data_size": 63488 00:20:25.796 }, 00:20:25.796 { 00:20:25.796 "name": "BaseBdev3", 00:20:25.796 "uuid": "89d2b397-b7f6-43bf-8793-f186ddfff443", 00:20:25.796 "is_configured": true, 00:20:25.796 "data_offset": 2048, 00:20:25.796 "data_size": 63488 00:20:25.796 } 00:20:25.796 ] 00:20:25.796 } 00:20:25.796 } 00:20:25.796 }' 00:20:25.796 13:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:25.796 13:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:20:25.796 BaseBdev2 00:20:25.796 BaseBdev3' 00:20:25.796 13:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:25.796 13:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:25.796 13:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:25.796 13:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:25.796 13:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:20:25.796 13:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:25.796 13:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.796 13:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.796 13:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:25.796 13:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:25.796 13:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:25.796 13:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:25.796 13:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.796 13:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.796 13:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:25.796 13:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.796 13:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:25.796 13:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:25.796 13:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:25.796 13:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:25.796 13:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.796 13:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.796 13:40:25 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:25.796 13:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.055 13:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:26.055 13:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:26.055 13:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:26.055 13:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.055 13:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.055 [2024-11-20 13:40:25.290454] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:26.055 [2024-11-20 13:40:25.290484] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:26.055 [2024-11-20 13:40:25.290569] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:26.055 [2024-11-20 13:40:25.290858] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:26.055 [2024-11-20 13:40:25.290875] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:20:26.055 13:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.055 13:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80315 00:20:26.055 13:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80315 ']' 00:20:26.055 13:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80315 00:20:26.055 13:40:25 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@959 -- # uname 00:20:26.055 13:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:26.055 13:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80315 00:20:26.055 13:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:26.055 killing process with pid 80315 00:20:26.055 13:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:26.055 13:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80315' 00:20:26.055 13:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80315 00:20:26.055 [2024-11-20 13:40:25.343267] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:26.055 13:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80315 00:20:26.314 [2024-11-20 13:40:25.674332] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:27.715 13:40:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:20:27.715 00:20:27.715 real 0m10.823s 00:20:27.715 user 0m17.086s 00:20:27.715 sys 0m2.236s 00:20:27.715 13:40:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:27.715 ************************************ 00:20:27.715 END TEST raid5f_state_function_test_sb 00:20:27.715 ************************************ 00:20:27.715 13:40:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.715 13:40:26 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:20:27.715 13:40:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:27.715 13:40:26 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:20:27.715 13:40:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:27.715 ************************************ 00:20:27.715 START TEST raid5f_superblock_test 00:20:27.715 ************************************ 00:20:27.715 13:40:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:20:27.715 13:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:20:27.715 13:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:20:27.715 13:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:27.715 13:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:27.715 13:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:27.715 13:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:20:27.715 13:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:27.715 13:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:27.715 13:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:27.715 13:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:27.715 13:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:27.715 13:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:27.715 13:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:27.715 13:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:20:27.716 13:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # 
strip_size=64 00:20:27.716 13:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:20:27.716 13:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=80930 00:20:27.716 13:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:27.716 13:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 80930 00:20:27.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:27.716 13:40:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 80930 ']' 00:20:27.716 13:40:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:27.716 13:40:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:27.716 13:40:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:27.716 13:40:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:27.716 13:40:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.716 [2024-11-20 13:40:27.084511] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:20:27.716 [2024-11-20 13:40:27.084637] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80930 ] 00:20:27.974 [2024-11-20 13:40:27.267444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.974 [2024-11-20 13:40:27.393667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.234 [2024-11-20 13:40:27.621788] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:28.234 [2024-11-20 13:40:27.622074] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:28.492 13:40:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:28.492 13:40:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:20:28.492 13:40:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:28.492 13:40:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:28.492 13:40:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:28.492 13:40:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:28.492 13:40:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:28.492 13:40:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:28.492 13:40:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:28.492 13:40:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:28.492 13:40:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:20:28.492 13:40:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.492 13:40:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.492 malloc1 00:20:28.492 13:40:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.492 13:40:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:28.492 13:40:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.492 13:40:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.752 [2024-11-20 13:40:27.981837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:28.752 [2024-11-20 13:40:27.982024] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:28.752 [2024-11-20 13:40:27.982074] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:28.752 [2024-11-20 13:40:27.982088] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:28.752 [2024-11-20 13:40:27.984474] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:28.752 [2024-11-20 13:40:27.984514] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:28.752 pt1 00:20:28.752 13:40:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.752 13:40:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:28.752 13:40:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:28.752 13:40:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:28.752 13:40:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:20:28.752 13:40:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:28.752 13:40:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:28.752 13:40:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:28.752 13:40:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:28.752 13:40:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:20:28.752 13:40:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.752 13:40:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.752 malloc2 00:20:28.752 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.752 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:28.752 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.752 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.752 [2024-11-20 13:40:28.036364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:28.752 [2024-11-20 13:40:28.036420] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:28.752 [2024-11-20 13:40:28.036449] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:28.752 [2024-11-20 13:40:28.036461] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:28.752 [2024-11-20 13:40:28.038774] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:28.752 [2024-11-20 13:40:28.038814] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:28.752 pt2 00:20:28.752 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.752 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:28.752 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:28.752 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:20:28.752 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:20:28.752 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:28.752 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:28.752 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:28.752 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:28.752 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:20:28.752 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.752 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.752 malloc3 00:20:28.752 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.752 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:28.752 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.752 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.752 [2024-11-20 13:40:28.104050] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:28.753 [2024-11-20 13:40:28.104237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:28.753 [2024-11-20 13:40:28.104294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:28.753 [2024-11-20 13:40:28.104392] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:28.753 [2024-11-20 13:40:28.106823] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:28.753 [2024-11-20 13:40:28.106978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:28.753 pt3 00:20:28.753 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.753 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:28.753 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:28.753 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:20:28.753 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.753 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.753 [2024-11-20 13:40:28.116099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:28.753 [2024-11-20 13:40:28.118145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:28.753 [2024-11-20 13:40:28.118212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:28.753 [2024-11-20 13:40:28.118389] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:28.753 [2024-11-20 13:40:28.118412] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:20:28.753 [2024-11-20 13:40:28.118657] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:28.753 [2024-11-20 13:40:28.125127] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:28.753 [2024-11-20 13:40:28.125244] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:28.753 [2024-11-20 13:40:28.125565] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:28.753 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.753 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:28.753 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:28.753 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:28.753 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:28.753 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:28.753 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:28.753 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:28.753 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:28.753 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:28.753 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:28.753 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.753 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:20:28.753 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.753 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.753 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.753 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:28.753 "name": "raid_bdev1", 00:20:28.753 "uuid": "be4cb3d2-4255-46ec-b60a-a6d39961a507", 00:20:28.753 "strip_size_kb": 64, 00:20:28.753 "state": "online", 00:20:28.753 "raid_level": "raid5f", 00:20:28.753 "superblock": true, 00:20:28.753 "num_base_bdevs": 3, 00:20:28.753 "num_base_bdevs_discovered": 3, 00:20:28.753 "num_base_bdevs_operational": 3, 00:20:28.753 "base_bdevs_list": [ 00:20:28.753 { 00:20:28.753 "name": "pt1", 00:20:28.753 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:28.753 "is_configured": true, 00:20:28.753 "data_offset": 2048, 00:20:28.753 "data_size": 63488 00:20:28.753 }, 00:20:28.753 { 00:20:28.753 "name": "pt2", 00:20:28.753 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:28.753 "is_configured": true, 00:20:28.753 "data_offset": 2048, 00:20:28.753 "data_size": 63488 00:20:28.753 }, 00:20:28.753 { 00:20:28.753 "name": "pt3", 00:20:28.753 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:28.753 "is_configured": true, 00:20:28.753 "data_offset": 2048, 00:20:28.753 "data_size": 63488 00:20:28.753 } 00:20:28.753 ] 00:20:28.753 }' 00:20:28.753 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:28.753 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.322 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:29.322 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:29.322 13:40:28 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:29.322 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:29.322 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:29.322 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:29.322 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:29.322 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:29.322 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.322 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.322 [2024-11-20 13:40:28.592043] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:29.322 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.322 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:29.322 "name": "raid_bdev1", 00:20:29.322 "aliases": [ 00:20:29.322 "be4cb3d2-4255-46ec-b60a-a6d39961a507" 00:20:29.322 ], 00:20:29.322 "product_name": "Raid Volume", 00:20:29.322 "block_size": 512, 00:20:29.322 "num_blocks": 126976, 00:20:29.322 "uuid": "be4cb3d2-4255-46ec-b60a-a6d39961a507", 00:20:29.322 "assigned_rate_limits": { 00:20:29.322 "rw_ios_per_sec": 0, 00:20:29.322 "rw_mbytes_per_sec": 0, 00:20:29.322 "r_mbytes_per_sec": 0, 00:20:29.322 "w_mbytes_per_sec": 0 00:20:29.322 }, 00:20:29.322 "claimed": false, 00:20:29.322 "zoned": false, 00:20:29.322 "supported_io_types": { 00:20:29.322 "read": true, 00:20:29.322 "write": true, 00:20:29.322 "unmap": false, 00:20:29.322 "flush": false, 00:20:29.322 "reset": true, 00:20:29.322 "nvme_admin": false, 00:20:29.322 "nvme_io": false, 00:20:29.322 "nvme_io_md": false, 
00:20:29.322 "write_zeroes": true, 00:20:29.322 "zcopy": false, 00:20:29.322 "get_zone_info": false, 00:20:29.322 "zone_management": false, 00:20:29.322 "zone_append": false, 00:20:29.322 "compare": false, 00:20:29.322 "compare_and_write": false, 00:20:29.322 "abort": false, 00:20:29.322 "seek_hole": false, 00:20:29.322 "seek_data": false, 00:20:29.322 "copy": false, 00:20:29.322 "nvme_iov_md": false 00:20:29.322 }, 00:20:29.322 "driver_specific": { 00:20:29.322 "raid": { 00:20:29.322 "uuid": "be4cb3d2-4255-46ec-b60a-a6d39961a507", 00:20:29.322 "strip_size_kb": 64, 00:20:29.322 "state": "online", 00:20:29.322 "raid_level": "raid5f", 00:20:29.322 "superblock": true, 00:20:29.322 "num_base_bdevs": 3, 00:20:29.322 "num_base_bdevs_discovered": 3, 00:20:29.322 "num_base_bdevs_operational": 3, 00:20:29.322 "base_bdevs_list": [ 00:20:29.322 { 00:20:29.322 "name": "pt1", 00:20:29.322 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:29.322 "is_configured": true, 00:20:29.322 "data_offset": 2048, 00:20:29.322 "data_size": 63488 00:20:29.322 }, 00:20:29.322 { 00:20:29.322 "name": "pt2", 00:20:29.322 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:29.322 "is_configured": true, 00:20:29.322 "data_offset": 2048, 00:20:29.322 "data_size": 63488 00:20:29.322 }, 00:20:29.322 { 00:20:29.322 "name": "pt3", 00:20:29.322 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:29.322 "is_configured": true, 00:20:29.322 "data_offset": 2048, 00:20:29.322 "data_size": 63488 00:20:29.322 } 00:20:29.322 ] 00:20:29.322 } 00:20:29.322 } 00:20:29.322 }' 00:20:29.322 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:29.322 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:29.322 pt2 00:20:29.322 pt3' 00:20:29.322 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:20:29.322 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:29.322 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:29.322 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:29.322 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.322 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:29.322 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.322 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.322 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:29.322 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:29.322 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:29.322 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:29.322 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.322 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.322 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:29.322 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.322 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:29.322 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:29.322 
13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:29.322 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:29.322 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:20:29.322 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.322 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.582 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.582 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:29.582 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:29.582 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:29.582 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.582 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.582 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:29.582 [2024-11-20 13:40:28.835662] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:29.582 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.582 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=be4cb3d2-4255-46ec-b60a-a6d39961a507 00:20:29.582 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z be4cb3d2-4255-46ec-b60a-a6d39961a507 ']' 00:20:29.582 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:29.582 13:40:28 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.582 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.582 [2024-11-20 13:40:28.871444] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:29.582 [2024-11-20 13:40:28.871475] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:29.582 [2024-11-20 13:40:28.871554] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:29.582 [2024-11-20 13:40:28.871628] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:29.582 [2024-11-20 13:40:28.871639] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:29.582 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.582 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.582 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.582 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.582 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:29.582 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.582 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:29.582 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:29.582 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:29.582 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:20:29.582 13:40:28 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.582 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.582 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.582 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:29.582 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:29.582 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.582 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.582 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.582 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:29.582 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:20:29.582 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.582 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.582 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.582 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:29.582 13:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:29.582 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.582 13:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.582 13:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.582 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:20:29.582 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:20:29.582 13:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:20:29.583 13:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:20:29.583 13:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:29.583 13:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:29.583 13:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:29.583 13:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:29.583 13:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:20:29.583 13:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.583 13:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.583 [2024-11-20 13:40:29.019279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:29.583 [2024-11-20 13:40:29.021416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:29.583 [2024-11-20 13:40:29.021606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:29.583 [2024-11-20 13:40:29.021673] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:29.583 [2024-11-20 13:40:29.021729] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:29.583 [2024-11-20 13:40:29.021751] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:20:29.583 [2024-11-20 13:40:29.021773] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:29.583 [2024-11-20 13:40:29.021784] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:29.583 request: 00:20:29.583 { 00:20:29.583 "name": "raid_bdev1", 00:20:29.583 "raid_level": "raid5f", 00:20:29.583 "base_bdevs": [ 00:20:29.583 "malloc1", 00:20:29.583 "malloc2", 00:20:29.583 "malloc3" 00:20:29.583 ], 00:20:29.583 "strip_size_kb": 64, 00:20:29.583 "superblock": false, 00:20:29.583 "method": "bdev_raid_create", 00:20:29.583 "req_id": 1 00:20:29.583 } 00:20:29.583 Got JSON-RPC error response 00:20:29.583 response: 00:20:29.583 { 00:20:29.583 "code": -17, 00:20:29.583 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:29.583 } 00:20:29.583 13:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:29.583 13:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:20:29.583 13:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:29.583 13:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:29.583 13:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:29.583 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:29.583 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.583 13:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.583 
13:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.583 13:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.843 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:29.843 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:29.843 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:29.843 13:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.843 13:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.843 [2024-11-20 13:40:29.079184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:29.843 [2024-11-20 13:40:29.079238] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:29.843 [2024-11-20 13:40:29.079260] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:29.843 [2024-11-20 13:40:29.079271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:29.843 [2024-11-20 13:40:29.081655] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:29.843 [2024-11-20 13:40:29.081693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:29.843 [2024-11-20 13:40:29.081773] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:29.843 [2024-11-20 13:40:29.081830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:29.843 pt1 00:20:29.843 13:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.843 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:20:29.843 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:29.843 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:29.843 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:29.843 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:29.843 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:29.843 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:29.843 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:29.843 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:29.843 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:29.843 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.843 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.843 13:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.843 13:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.843 13:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.843 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:29.843 "name": "raid_bdev1", 00:20:29.843 "uuid": "be4cb3d2-4255-46ec-b60a-a6d39961a507", 00:20:29.843 "strip_size_kb": 64, 00:20:29.843 "state": "configuring", 00:20:29.843 "raid_level": "raid5f", 00:20:29.843 "superblock": true, 00:20:29.843 "num_base_bdevs": 3, 00:20:29.843 "num_base_bdevs_discovered": 1, 00:20:29.843 
"num_base_bdevs_operational": 3, 00:20:29.843 "base_bdevs_list": [ 00:20:29.843 { 00:20:29.843 "name": "pt1", 00:20:29.843 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:29.843 "is_configured": true, 00:20:29.843 "data_offset": 2048, 00:20:29.843 "data_size": 63488 00:20:29.843 }, 00:20:29.843 { 00:20:29.843 "name": null, 00:20:29.843 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:29.843 "is_configured": false, 00:20:29.843 "data_offset": 2048, 00:20:29.843 "data_size": 63488 00:20:29.843 }, 00:20:29.843 { 00:20:29.843 "name": null, 00:20:29.843 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:29.843 "is_configured": false, 00:20:29.843 "data_offset": 2048, 00:20:29.843 "data_size": 63488 00:20:29.843 } 00:20:29.843 ] 00:20:29.843 }' 00:20:29.843 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:29.843 13:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.102 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:20:30.102 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:30.102 13:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.102 13:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.102 [2024-11-20 13:40:29.550625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:30.102 [2024-11-20 13:40:29.550833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:30.102 [2024-11-20 13:40:29.550894] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:20:30.102 [2024-11-20 13:40:29.550978] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:30.102 [2024-11-20 13:40:29.551463] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:30.102 [2024-11-20 13:40:29.551606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:30.102 [2024-11-20 13:40:29.551794] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:30.102 [2024-11-20 13:40:29.551928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:30.102 pt2 00:20:30.102 13:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.102 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:20:30.102 13:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.102 13:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.102 [2024-11-20 13:40:29.558606] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:30.102 13:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.102 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:20:30.102 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:30.102 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:30.102 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:30.102 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:30.102 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:30.102 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:30.102 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:20:30.102 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:30.102 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:30.102 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.102 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.102 13:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.102 13:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.361 13:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.361 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:30.361 "name": "raid_bdev1", 00:20:30.361 "uuid": "be4cb3d2-4255-46ec-b60a-a6d39961a507", 00:20:30.361 "strip_size_kb": 64, 00:20:30.361 "state": "configuring", 00:20:30.361 "raid_level": "raid5f", 00:20:30.361 "superblock": true, 00:20:30.361 "num_base_bdevs": 3, 00:20:30.361 "num_base_bdevs_discovered": 1, 00:20:30.361 "num_base_bdevs_operational": 3, 00:20:30.361 "base_bdevs_list": [ 00:20:30.361 { 00:20:30.361 "name": "pt1", 00:20:30.361 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:30.361 "is_configured": true, 00:20:30.361 "data_offset": 2048, 00:20:30.361 "data_size": 63488 00:20:30.361 }, 00:20:30.361 { 00:20:30.361 "name": null, 00:20:30.361 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:30.361 "is_configured": false, 00:20:30.361 "data_offset": 0, 00:20:30.361 "data_size": 63488 00:20:30.361 }, 00:20:30.361 { 00:20:30.361 "name": null, 00:20:30.361 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:30.361 "is_configured": false, 00:20:30.361 "data_offset": 2048, 00:20:30.361 "data_size": 63488 00:20:30.361 } 00:20:30.361 ] 00:20:30.361 }' 00:20:30.361 13:40:29 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:30.361 13:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.621 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:30.621 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:30.621 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:30.621 13:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.621 13:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.621 [2024-11-20 13:40:29.966196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:30.621 [2024-11-20 13:40:29.966280] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:30.621 [2024-11-20 13:40:29.966303] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:30.621 [2024-11-20 13:40:29.966317] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:30.621 [2024-11-20 13:40:29.966784] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:30.621 [2024-11-20 13:40:29.966807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:30.621 [2024-11-20 13:40:29.966890] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:30.621 [2024-11-20 13:40:29.966915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:30.621 pt2 00:20:30.621 13:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.621 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:30.621 13:40:29 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:30.621 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:30.621 13:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.621 13:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.621 [2024-11-20 13:40:29.978181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:30.621 [2024-11-20 13:40:29.978234] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:30.621 [2024-11-20 13:40:29.978252] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:30.621 [2024-11-20 13:40:29.978273] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:30.621 [2024-11-20 13:40:29.978645] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:30.621 [2024-11-20 13:40:29.978669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:30.621 [2024-11-20 13:40:29.978732] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:30.622 [2024-11-20 13:40:29.978753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:30.622 [2024-11-20 13:40:29.978883] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:30.622 [2024-11-20 13:40:29.978898] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:30.622 [2024-11-20 13:40:29.979159] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:30.622 [2024-11-20 13:40:29.984732] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:30.622 [2024-11-20 13:40:29.985590] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:30.622 [2024-11-20 13:40:29.985801] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:30.622 pt3 00:20:30.622 13:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.622 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:30.622 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:30.622 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:30.622 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:30.622 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:30.622 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:30.622 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:30.622 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:30.622 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:30.622 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:30.622 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:30.622 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:30.622 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.622 13:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.622 13:40:29 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.622 13:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.622 13:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.622 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:30.622 "name": "raid_bdev1", 00:20:30.622 "uuid": "be4cb3d2-4255-46ec-b60a-a6d39961a507", 00:20:30.622 "strip_size_kb": 64, 00:20:30.622 "state": "online", 00:20:30.622 "raid_level": "raid5f", 00:20:30.622 "superblock": true, 00:20:30.622 "num_base_bdevs": 3, 00:20:30.622 "num_base_bdevs_discovered": 3, 00:20:30.622 "num_base_bdevs_operational": 3, 00:20:30.622 "base_bdevs_list": [ 00:20:30.622 { 00:20:30.622 "name": "pt1", 00:20:30.622 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:30.622 "is_configured": true, 00:20:30.622 "data_offset": 2048, 00:20:30.622 "data_size": 63488 00:20:30.622 }, 00:20:30.622 { 00:20:30.622 "name": "pt2", 00:20:30.622 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:30.622 "is_configured": true, 00:20:30.622 "data_offset": 2048, 00:20:30.622 "data_size": 63488 00:20:30.622 }, 00:20:30.622 { 00:20:30.622 "name": "pt3", 00:20:30.622 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:30.622 "is_configured": true, 00:20:30.622 "data_offset": 2048, 00:20:30.622 "data_size": 63488 00:20:30.622 } 00:20:30.622 ] 00:20:30.622 }' 00:20:30.622 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:30.622 13:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.190 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:31.190 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:31.190 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:20:31.190 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:31.190 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:31.190 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:31.190 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:31.190 13:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.190 13:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.190 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:31.190 [2024-11-20 13:40:30.451811] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:31.190 13:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.190 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:31.190 "name": "raid_bdev1", 00:20:31.190 "aliases": [ 00:20:31.190 "be4cb3d2-4255-46ec-b60a-a6d39961a507" 00:20:31.190 ], 00:20:31.190 "product_name": "Raid Volume", 00:20:31.190 "block_size": 512, 00:20:31.190 "num_blocks": 126976, 00:20:31.190 "uuid": "be4cb3d2-4255-46ec-b60a-a6d39961a507", 00:20:31.190 "assigned_rate_limits": { 00:20:31.190 "rw_ios_per_sec": 0, 00:20:31.190 "rw_mbytes_per_sec": 0, 00:20:31.190 "r_mbytes_per_sec": 0, 00:20:31.190 "w_mbytes_per_sec": 0 00:20:31.190 }, 00:20:31.190 "claimed": false, 00:20:31.190 "zoned": false, 00:20:31.190 "supported_io_types": { 00:20:31.190 "read": true, 00:20:31.190 "write": true, 00:20:31.190 "unmap": false, 00:20:31.190 "flush": false, 00:20:31.190 "reset": true, 00:20:31.190 "nvme_admin": false, 00:20:31.190 "nvme_io": false, 00:20:31.190 "nvme_io_md": false, 00:20:31.190 "write_zeroes": true, 00:20:31.190 "zcopy": false, 00:20:31.190 
"get_zone_info": false, 00:20:31.190 "zone_management": false, 00:20:31.190 "zone_append": false, 00:20:31.190 "compare": false, 00:20:31.190 "compare_and_write": false, 00:20:31.190 "abort": false, 00:20:31.190 "seek_hole": false, 00:20:31.190 "seek_data": false, 00:20:31.190 "copy": false, 00:20:31.190 "nvme_iov_md": false 00:20:31.190 }, 00:20:31.190 "driver_specific": { 00:20:31.190 "raid": { 00:20:31.190 "uuid": "be4cb3d2-4255-46ec-b60a-a6d39961a507", 00:20:31.190 "strip_size_kb": 64, 00:20:31.191 "state": "online", 00:20:31.191 "raid_level": "raid5f", 00:20:31.191 "superblock": true, 00:20:31.191 "num_base_bdevs": 3, 00:20:31.191 "num_base_bdevs_discovered": 3, 00:20:31.191 "num_base_bdevs_operational": 3, 00:20:31.191 "base_bdevs_list": [ 00:20:31.191 { 00:20:31.191 "name": "pt1", 00:20:31.191 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:31.191 "is_configured": true, 00:20:31.191 "data_offset": 2048, 00:20:31.191 "data_size": 63488 00:20:31.191 }, 00:20:31.191 { 00:20:31.191 "name": "pt2", 00:20:31.191 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:31.191 "is_configured": true, 00:20:31.191 "data_offset": 2048, 00:20:31.191 "data_size": 63488 00:20:31.191 }, 00:20:31.191 { 00:20:31.191 "name": "pt3", 00:20:31.191 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:31.191 "is_configured": true, 00:20:31.191 "data_offset": 2048, 00:20:31.191 "data_size": 63488 00:20:31.191 } 00:20:31.191 ] 00:20:31.191 } 00:20:31.191 } 00:20:31.191 }' 00:20:31.191 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:31.191 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:31.191 pt2 00:20:31.191 pt3' 00:20:31.191 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:31.191 13:40:30 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:31.191 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:31.191 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:31.191 13:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.191 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:31.191 13:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.191 13:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.191 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:31.191 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:31.191 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:31.191 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:31.191 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:31.191 13:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.191 13:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.191 13:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.191 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:31.191 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:31.191 13:40:30 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:31.191 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:20:31.191 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:31.191 13:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.191 13:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.451 13:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.451 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:31.451 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:31.451 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:31.451 13:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.451 13:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.451 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:31.451 [2024-11-20 13:40:30.727417] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:31.451 13:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.451 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' be4cb3d2-4255-46ec-b60a-a6d39961a507 '!=' be4cb3d2-4255-46ec-b60a-a6d39961a507 ']' 00:20:31.451 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:20:31.451 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:31.451 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:20:31.451 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:20:31.451 13:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.451 13:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.451 [2024-11-20 13:40:30.767229] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:31.451 13:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.451 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:31.451 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:31.451 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:31.451 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:31.451 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:31.451 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:31.451 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:31.451 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:31.451 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:31.451 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:31.451 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.451 13:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.451 13:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.451 
13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.451 13:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.451 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:31.451 "name": "raid_bdev1", 00:20:31.451 "uuid": "be4cb3d2-4255-46ec-b60a-a6d39961a507", 00:20:31.451 "strip_size_kb": 64, 00:20:31.451 "state": "online", 00:20:31.451 "raid_level": "raid5f", 00:20:31.451 "superblock": true, 00:20:31.451 "num_base_bdevs": 3, 00:20:31.451 "num_base_bdevs_discovered": 2, 00:20:31.451 "num_base_bdevs_operational": 2, 00:20:31.451 "base_bdevs_list": [ 00:20:31.451 { 00:20:31.451 "name": null, 00:20:31.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.451 "is_configured": false, 00:20:31.452 "data_offset": 0, 00:20:31.452 "data_size": 63488 00:20:31.452 }, 00:20:31.452 { 00:20:31.452 "name": "pt2", 00:20:31.452 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:31.452 "is_configured": true, 00:20:31.452 "data_offset": 2048, 00:20:31.452 "data_size": 63488 00:20:31.452 }, 00:20:31.452 { 00:20:31.452 "name": "pt3", 00:20:31.452 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:31.452 "is_configured": true, 00:20:31.452 "data_offset": 2048, 00:20:31.452 "data_size": 63488 00:20:31.452 } 00:20:31.452 ] 00:20:31.452 }' 00:20:31.452 13:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:31.452 13:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.711 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:31.711 13:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.711 13:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.711 [2024-11-20 13:40:31.194566] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:31.711 [2024-11-20 13:40:31.194603] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:31.711 [2024-11-20 13:40:31.194680] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:31.711 [2024-11-20 13:40:31.194739] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:31.711 [2024-11-20 13:40:31.194756] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.971 [2024-11-20 13:40:31.274421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:31.971 [2024-11-20 13:40:31.274485] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:31.971 [2024-11-20 13:40:31.274505] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:20:31.971 [2024-11-20 13:40:31.274519] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:20:31.971 [2024-11-20 13:40:31.276908] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:31.971 [2024-11-20 13:40:31.276953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:31.971 [2024-11-20 13:40:31.277031] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:31.971 [2024-11-20 13:40:31.277094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:31.971 pt2 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:31.971 "name": "raid_bdev1", 00:20:31.971 "uuid": "be4cb3d2-4255-46ec-b60a-a6d39961a507", 00:20:31.971 "strip_size_kb": 64, 00:20:31.971 "state": "configuring", 00:20:31.971 "raid_level": "raid5f", 00:20:31.971 "superblock": true, 00:20:31.971 "num_base_bdevs": 3, 00:20:31.971 "num_base_bdevs_discovered": 1, 00:20:31.971 "num_base_bdevs_operational": 2, 00:20:31.971 "base_bdevs_list": [ 00:20:31.971 { 00:20:31.971 "name": null, 00:20:31.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.971 "is_configured": false, 00:20:31.971 "data_offset": 2048, 00:20:31.971 "data_size": 63488 00:20:31.971 }, 00:20:31.971 { 00:20:31.971 "name": "pt2", 00:20:31.971 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:31.971 "is_configured": true, 00:20:31.971 "data_offset": 2048, 00:20:31.971 "data_size": 63488 00:20:31.971 }, 00:20:31.971 { 00:20:31.971 "name": null, 00:20:31.971 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:31.971 "is_configured": false, 00:20:31.971 "data_offset": 2048, 00:20:31.971 "data_size": 63488 00:20:31.971 } 00:20:31.971 ] 00:20:31.971 }' 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:31.971 13:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.230 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:20:32.230 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:32.230 13:40:31 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:20:32.230 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:32.230 13:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.230 13:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.230 [2024-11-20 13:40:31.686165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:32.230 [2024-11-20 13:40:31.686235] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:32.230 [2024-11-20 13:40:31.686265] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:20:32.230 [2024-11-20 13:40:31.686280] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:32.230 [2024-11-20 13:40:31.686725] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:32.230 [2024-11-20 13:40:31.686747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:32.230 [2024-11-20 13:40:31.686816] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:32.230 [2024-11-20 13:40:31.686843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:32.231 [2024-11-20 13:40:31.686951] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:32.231 [2024-11-20 13:40:31.686963] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:32.231 [2024-11-20 13:40:31.687232] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:32.231 [2024-11-20 13:40:31.692342] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:32.231 [2024-11-20 13:40:31.692369] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:20:32.231 [2024-11-20 13:40:31.692654] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:32.231 pt3 00:20:32.231 13:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.231 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:32.231 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:32.231 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:32.231 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:32.231 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:32.231 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:32.231 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:32.231 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:32.231 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:32.231 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:32.231 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.231 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.231 13:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.231 13:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.489 13:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.489 13:40:31 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:32.489 "name": "raid_bdev1", 00:20:32.489 "uuid": "be4cb3d2-4255-46ec-b60a-a6d39961a507", 00:20:32.489 "strip_size_kb": 64, 00:20:32.489 "state": "online", 00:20:32.489 "raid_level": "raid5f", 00:20:32.489 "superblock": true, 00:20:32.489 "num_base_bdevs": 3, 00:20:32.489 "num_base_bdevs_discovered": 2, 00:20:32.489 "num_base_bdevs_operational": 2, 00:20:32.489 "base_bdevs_list": [ 00:20:32.489 { 00:20:32.489 "name": null, 00:20:32.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.489 "is_configured": false, 00:20:32.489 "data_offset": 2048, 00:20:32.489 "data_size": 63488 00:20:32.489 }, 00:20:32.489 { 00:20:32.489 "name": "pt2", 00:20:32.489 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:32.489 "is_configured": true, 00:20:32.489 "data_offset": 2048, 00:20:32.489 "data_size": 63488 00:20:32.489 }, 00:20:32.489 { 00:20:32.489 "name": "pt3", 00:20:32.489 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:32.489 "is_configured": true, 00:20:32.489 "data_offset": 2048, 00:20:32.489 "data_size": 63488 00:20:32.489 } 00:20:32.489 ] 00:20:32.489 }' 00:20:32.489 13:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:32.489 13:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.748 13:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:32.748 13:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.748 13:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.748 [2024-11-20 13:40:32.074865] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:32.748 [2024-11-20 13:40:32.074904] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:32.748 [2024-11-20 13:40:32.074981] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:32.748 [2024-11-20 13:40:32.075045] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:32.748 [2024-11-20 13:40:32.075070] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:32.748 13:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.748 13:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:32.748 13:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.748 13:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.748 13:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.748 13:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.748 13:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:32.748 13:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:20:32.748 13:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:20:32.748 13:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:20:32.748 13:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:20:32.748 13:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.748 13:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.748 13:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.748 13:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:20:32.748 13:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.748 13:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.748 [2024-11-20 13:40:32.126806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:32.748 [2024-11-20 13:40:32.126867] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:32.748 [2024-11-20 13:40:32.126890] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:32.748 [2024-11-20 13:40:32.126902] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:32.748 [2024-11-20 13:40:32.129435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:32.748 [2024-11-20 13:40:32.129475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:32.748 [2024-11-20 13:40:32.129556] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:32.748 [2024-11-20 13:40:32.129615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:32.748 [2024-11-20 13:40:32.129760] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:32.748 [2024-11-20 13:40:32.129773] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:32.748 [2024-11-20 13:40:32.129791] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:32.748 [2024-11-20 13:40:32.129850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:32.748 pt1 00:20:32.748 13:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.748 13:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:20:32.748 13:40:32 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:20:32.749 13:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:32.749 13:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:32.749 13:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:32.749 13:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:32.749 13:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:32.749 13:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:32.749 13:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:32.749 13:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:32.749 13:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:32.749 13:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.749 13:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.749 13:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.749 13:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.749 13:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.749 13:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:32.749 "name": "raid_bdev1", 00:20:32.749 "uuid": "be4cb3d2-4255-46ec-b60a-a6d39961a507", 00:20:32.749 "strip_size_kb": 64, 00:20:32.749 "state": "configuring", 00:20:32.749 "raid_level": "raid5f", 00:20:32.749 
"superblock": true, 00:20:32.749 "num_base_bdevs": 3, 00:20:32.749 "num_base_bdevs_discovered": 1, 00:20:32.749 "num_base_bdevs_operational": 2, 00:20:32.749 "base_bdevs_list": [ 00:20:32.749 { 00:20:32.749 "name": null, 00:20:32.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.749 "is_configured": false, 00:20:32.749 "data_offset": 2048, 00:20:32.749 "data_size": 63488 00:20:32.749 }, 00:20:32.749 { 00:20:32.749 "name": "pt2", 00:20:32.749 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:32.749 "is_configured": true, 00:20:32.749 "data_offset": 2048, 00:20:32.749 "data_size": 63488 00:20:32.749 }, 00:20:32.749 { 00:20:32.749 "name": null, 00:20:32.749 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:32.749 "is_configured": false, 00:20:32.749 "data_offset": 2048, 00:20:32.749 "data_size": 63488 00:20:32.749 } 00:20:32.749 ] 00:20:32.749 }' 00:20:32.749 13:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:32.749 13:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.316 13:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:33.316 13:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:20:33.316 13:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.316 13:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.316 13:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.316 13:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:20:33.316 13:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:33.316 13:40:32 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.316 13:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.316 [2024-11-20 13:40:32.614395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:33.316 [2024-11-20 13:40:32.614461] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:33.316 [2024-11-20 13:40:32.614487] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:20:33.316 [2024-11-20 13:40:32.614499] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:33.316 [2024-11-20 13:40:32.614981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:33.316 [2024-11-20 13:40:32.615001] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:33.316 [2024-11-20 13:40:32.615099] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:33.316 [2024-11-20 13:40:32.615124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:33.316 [2024-11-20 13:40:32.615257] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:20:33.316 [2024-11-20 13:40:32.615267] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:33.316 [2024-11-20 13:40:32.615530] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:33.316 [2024-11-20 13:40:32.621134] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:33.316 [2024-11-20 13:40:32.621168] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:33.316 pt3 00:20:33.316 [2024-11-20 13:40:32.621405] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:33.316 13:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:20:33.316 13:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:33.316 13:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:33.316 13:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:33.316 13:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:33.316 13:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:33.316 13:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:33.316 13:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:33.316 13:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:33.316 13:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:33.316 13:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:33.316 13:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.316 13:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.316 13:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.316 13:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.316 13:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.316 13:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:33.316 "name": "raid_bdev1", 00:20:33.316 "uuid": "be4cb3d2-4255-46ec-b60a-a6d39961a507", 00:20:33.316 "strip_size_kb": 64, 00:20:33.316 "state": "online", 00:20:33.316 "raid_level": 
"raid5f", 00:20:33.316 "superblock": true, 00:20:33.316 "num_base_bdevs": 3, 00:20:33.316 "num_base_bdevs_discovered": 2, 00:20:33.316 "num_base_bdevs_operational": 2, 00:20:33.316 "base_bdevs_list": [ 00:20:33.316 { 00:20:33.316 "name": null, 00:20:33.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.316 "is_configured": false, 00:20:33.316 "data_offset": 2048, 00:20:33.316 "data_size": 63488 00:20:33.316 }, 00:20:33.316 { 00:20:33.316 "name": "pt2", 00:20:33.316 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:33.316 "is_configured": true, 00:20:33.316 "data_offset": 2048, 00:20:33.316 "data_size": 63488 00:20:33.316 }, 00:20:33.316 { 00:20:33.316 "name": "pt3", 00:20:33.316 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:33.316 "is_configured": true, 00:20:33.316 "data_offset": 2048, 00:20:33.316 "data_size": 63488 00:20:33.316 } 00:20:33.316 ] 00:20:33.316 }' 00:20:33.316 13:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:33.316 13:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.575 13:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:33.575 13:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:33.575 13:40:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.575 13:40:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.575 13:40:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.575 13:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:33.576 13:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:33.576 13:40:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:33.576 13:40:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.576 13:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:33.834 [2024-11-20 13:40:33.063606] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:33.834 13:40:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.834 13:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' be4cb3d2-4255-46ec-b60a-a6d39961a507 '!=' be4cb3d2-4255-46ec-b60a-a6d39961a507 ']' 00:20:33.834 13:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 80930 00:20:33.834 13:40:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 80930 ']' 00:20:33.834 13:40:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 80930 00:20:33.834 13:40:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:20:33.834 13:40:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:33.834 13:40:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80930 00:20:33.834 13:40:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:33.834 killing process with pid 80930 00:20:33.834 13:40:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:33.834 13:40:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80930' 00:20:33.834 13:40:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 80930 00:20:33.834 [2024-11-20 13:40:33.146278] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:33.834 [2024-11-20 13:40:33.146376] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:20:33.834 [2024-11-20 13:40:33.146440] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:33.834 [2024-11-20 13:40:33.146455] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:33.834 13:40:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 80930 00:20:34.093 [2024-11-20 13:40:33.453157] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:35.471 13:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:20:35.471 00:20:35.471 real 0m7.606s 00:20:35.471 user 0m11.802s 00:20:35.471 sys 0m1.597s 00:20:35.471 ************************************ 00:20:35.471 END TEST raid5f_superblock_test 00:20:35.471 ************************************ 00:20:35.471 13:40:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:35.471 13:40:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.471 13:40:34 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:20:35.471 13:40:34 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:20:35.471 13:40:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:35.471 13:40:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:35.471 13:40:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:35.471 ************************************ 00:20:35.471 START TEST raid5f_rebuild_test 00:20:35.471 ************************************ 00:20:35.471 13:40:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:20:35.471 13:40:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:20:35.471 13:40:34 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:20:35.471 13:40:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:20:35.471 13:40:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:35.471 13:40:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:35.471 13:40:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:35.471 13:40:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:35.471 13:40:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:35.471 13:40:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:35.471 13:40:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:35.471 13:40:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:35.471 13:40:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:35.471 13:40:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:35.471 13:40:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:20:35.471 13:40:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:35.471 13:40:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:35.472 13:40:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:35.472 13:40:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:35.472 13:40:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:35.472 13:40:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:35.472 13:40:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:20:35.472 13:40:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:35.472 13:40:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:35.472 13:40:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:20:35.472 13:40:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:20:35.472 13:40:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:20:35.472 13:40:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:20:35.472 13:40:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:20:35.472 13:40:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81374 00:20:35.472 13:40:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81374 00:20:35.472 13:40:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:35.472 13:40:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81374 ']' 00:20:35.472 13:40:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.472 13:40:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:35.472 13:40:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:35.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:35.472 13:40:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:35.472 13:40:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.472 [2024-11-20 13:40:34.770515] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:20:35.472 [2024-11-20 13:40:34.770844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:20:35.472 Zero copy mechanism will not be used. 00:20:35.472 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81374 ] 00:20:35.472 [2024-11-20 13:40:34.939457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.730 [2024-11-20 13:40:35.049707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.029 [2024-11-20 13:40:35.259992] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:36.029 [2024-11-20 13:40:35.260033] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:36.288 13:40:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:36.288 13:40:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:20:36.288 13:40:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:36.288 13:40:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:36.288 13:40:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.288 13:40:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.288 BaseBdev1_malloc 00:20:36.288 13:40:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.288 
13:40:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:36.288 13:40:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.288 13:40:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.288 [2024-11-20 13:40:35.659177] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:36.288 [2024-11-20 13:40:35.659244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:36.288 [2024-11-20 13:40:35.659275] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:36.288 [2024-11-20 13:40:35.659290] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:36.288 [2024-11-20 13:40:35.661616] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:36.288 [2024-11-20 13:40:35.661661] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:36.288 BaseBdev1 00:20:36.288 13:40:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.288 13:40:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:36.288 13:40:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:36.288 13:40:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.288 13:40:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.288 BaseBdev2_malloc 00:20:36.288 13:40:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.288 13:40:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:36.288 13:40:35 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.288 13:40:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.288 [2024-11-20 13:40:35.713195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:36.288 [2024-11-20 13:40:35.713267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:36.288 [2024-11-20 13:40:35.713292] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:36.288 [2024-11-20 13:40:35.713307] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:36.288 [2024-11-20 13:40:35.715712] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:36.288 [2024-11-20 13:40:35.715756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:36.288 BaseBdev2 00:20:36.288 13:40:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.288 13:40:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:36.288 13:40:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:36.288 13:40:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.288 13:40:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.548 BaseBdev3_malloc 00:20:36.548 13:40:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.548 13:40:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:36.548 13:40:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.548 13:40:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.548 [2024-11-20 13:40:35.785037] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:36.548 [2024-11-20 13:40:35.785111] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:36.548 [2024-11-20 13:40:35.785136] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:36.548 [2024-11-20 13:40:35.785150] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:36.548 [2024-11-20 13:40:35.787559] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:36.548 [2024-11-20 13:40:35.787604] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:36.548 BaseBdev3 00:20:36.548 13:40:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.548 13:40:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:36.548 13:40:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.548 13:40:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.548 spare_malloc 00:20:36.548 13:40:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.548 13:40:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:36.548 13:40:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.549 13:40:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.549 spare_delay 00:20:36.549 13:40:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.549 13:40:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:36.549 13:40:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:36.549 13:40:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.549 [2024-11-20 13:40:35.854108] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:36.549 [2024-11-20 13:40:35.854168] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:36.549 [2024-11-20 13:40:35.854188] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:20:36.549 [2024-11-20 13:40:35.854202] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:36.549 [2024-11-20 13:40:35.856553] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:36.549 [2024-11-20 13:40:35.856711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:36.549 spare 00:20:36.549 13:40:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.549 13:40:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:20:36.549 13:40:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.549 13:40:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.549 [2024-11-20 13:40:35.866165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:36.549 [2024-11-20 13:40:35.868241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:36.549 [2024-11-20 13:40:35.868307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:36.549 [2024-11-20 13:40:35.868393] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:36.549 [2024-11-20 13:40:35.868406] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:20:36.549 [2024-11-20 
13:40:35.868686] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:36.549 [2024-11-20 13:40:35.874415] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:36.549 [2024-11-20 13:40:35.874540] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:36.549 [2024-11-20 13:40:35.874830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:36.549 13:40:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.549 13:40:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:36.549 13:40:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:36.549 13:40:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:36.549 13:40:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:36.549 13:40:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:36.549 13:40:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:36.549 13:40:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:36.549 13:40:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:36.549 13:40:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:36.549 13:40:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:36.549 13:40:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.549 13:40:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.549 13:40:35 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.549 13:40:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.549 13:40:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.549 13:40:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:36.549 "name": "raid_bdev1", 00:20:36.549 "uuid": "f2dd1414-34c7-42b7-a053-a2df220dcecf", 00:20:36.549 "strip_size_kb": 64, 00:20:36.549 "state": "online", 00:20:36.549 "raid_level": "raid5f", 00:20:36.549 "superblock": false, 00:20:36.549 "num_base_bdevs": 3, 00:20:36.549 "num_base_bdevs_discovered": 3, 00:20:36.549 "num_base_bdevs_operational": 3, 00:20:36.549 "base_bdevs_list": [ 00:20:36.549 { 00:20:36.549 "name": "BaseBdev1", 00:20:36.549 "uuid": "84c35f80-3231-5c09-bc2a-a51767329923", 00:20:36.549 "is_configured": true, 00:20:36.549 "data_offset": 0, 00:20:36.549 "data_size": 65536 00:20:36.549 }, 00:20:36.549 { 00:20:36.549 "name": "BaseBdev2", 00:20:36.549 "uuid": "6d954ed0-17fc-58d4-9c47-9b2377970606", 00:20:36.549 "is_configured": true, 00:20:36.549 "data_offset": 0, 00:20:36.549 "data_size": 65536 00:20:36.549 }, 00:20:36.549 { 00:20:36.549 "name": "BaseBdev3", 00:20:36.549 "uuid": "07447396-214e-54ff-8a82-7437e59568c2", 00:20:36.549 "is_configured": true, 00:20:36.549 "data_offset": 0, 00:20:36.549 "data_size": 65536 00:20:36.549 } 00:20:36.549 ] 00:20:36.549 }' 00:20:36.549 13:40:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:36.549 13:40:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.117 13:40:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:37.117 13:40:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:37.117 13:40:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.117 13:40:36 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.117 [2024-11-20 13:40:36.341030] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:37.117 13:40:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.117 13:40:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:20:37.117 13:40:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.117 13:40:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:37.117 13:40:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.117 13:40:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.117 13:40:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.117 13:40:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:20:37.117 13:40:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:37.117 13:40:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:37.117 13:40:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:37.117 13:40:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:37.117 13:40:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:37.117 13:40:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:37.117 13:40:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:37.117 13:40:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:37.117 13:40:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:20:37.117 13:40:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:20:37.117 13:40:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:37.117 13:40:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:37.117 13:40:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:37.376 [2024-11-20 13:40:36.664399] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:37.376 /dev/nbd0 00:20:37.376 13:40:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:37.376 13:40:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:37.376 13:40:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:37.376 13:40:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:20:37.376 13:40:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:37.376 13:40:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:37.376 13:40:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:37.376 13:40:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:20:37.376 13:40:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:37.376 13:40:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:37.376 13:40:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:37.376 1+0 records in 00:20:37.376 1+0 records out 00:20:37.376 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000332579 s, 12.3 MB/s 00:20:37.376 
13:40:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:37.376 13:40:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:20:37.376 13:40:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:37.376 13:40:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:37.376 13:40:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:20:37.376 13:40:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:37.376 13:40:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:37.376 13:40:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:20:37.376 13:40:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:20:37.376 13:40:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:20:37.376 13:40:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:20:37.945 512+0 records in 00:20:37.945 512+0 records out 00:20:37.945 67108864 bytes (67 MB, 64 MiB) copied, 0.583062 s, 115 MB/s 00:20:37.945 13:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:37.945 13:40:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:37.945 13:40:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:37.945 13:40:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:37.945 13:40:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:20:37.945 13:40:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:20:37.945 13:40:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:38.203 [2024-11-20 13:40:37.587170] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:38.203 13:40:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:38.203 13:40:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:38.203 13:40:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:38.203 13:40:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:38.203 13:40:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:38.203 13:40:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:38.203 13:40:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:38.203 13:40:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:38.203 13:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:38.203 13:40:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.203 13:40:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.203 [2024-11-20 13:40:37.630613] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:38.203 13:40:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.203 13:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:38.203 13:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:38.203 13:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:38.203 13:40:37 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:38.203 13:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:38.203 13:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:38.203 13:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:38.203 13:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:38.203 13:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:38.203 13:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:38.203 13:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.203 13:40:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.203 13:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.203 13:40:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.203 13:40:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.461 13:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:38.462 "name": "raid_bdev1", 00:20:38.462 "uuid": "f2dd1414-34c7-42b7-a053-a2df220dcecf", 00:20:38.462 "strip_size_kb": 64, 00:20:38.462 "state": "online", 00:20:38.462 "raid_level": "raid5f", 00:20:38.462 "superblock": false, 00:20:38.462 "num_base_bdevs": 3, 00:20:38.462 "num_base_bdevs_discovered": 2, 00:20:38.462 "num_base_bdevs_operational": 2, 00:20:38.462 "base_bdevs_list": [ 00:20:38.462 { 00:20:38.462 "name": null, 00:20:38.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.462 "is_configured": false, 00:20:38.462 "data_offset": 0, 00:20:38.462 "data_size": 65536 00:20:38.462 }, 00:20:38.462 { 00:20:38.462 
"name": "BaseBdev2", 00:20:38.462 "uuid": "6d954ed0-17fc-58d4-9c47-9b2377970606", 00:20:38.462 "is_configured": true, 00:20:38.462 "data_offset": 0, 00:20:38.462 "data_size": 65536 00:20:38.462 }, 00:20:38.462 { 00:20:38.462 "name": "BaseBdev3", 00:20:38.462 "uuid": "07447396-214e-54ff-8a82-7437e59568c2", 00:20:38.462 "is_configured": true, 00:20:38.462 "data_offset": 0, 00:20:38.462 "data_size": 65536 00:20:38.462 } 00:20:38.462 ] 00:20:38.462 }' 00:20:38.462 13:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:38.462 13:40:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.720 13:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:38.720 13:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.720 13:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.720 [2024-11-20 13:40:38.122476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:38.720 [2024-11-20 13:40:38.142924] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:20:38.720 13:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.720 13:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:38.720 [2024-11-20 13:40:38.152460] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:40.095 13:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:40.095 13:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:40.095 13:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:40.095 13:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:20:40.095 13:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:40.095 13:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.095 13:40:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.095 13:40:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.095 13:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.095 13:40:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.095 13:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:40.095 "name": "raid_bdev1", 00:20:40.095 "uuid": "f2dd1414-34c7-42b7-a053-a2df220dcecf", 00:20:40.095 "strip_size_kb": 64, 00:20:40.095 "state": "online", 00:20:40.095 "raid_level": "raid5f", 00:20:40.095 "superblock": false, 00:20:40.095 "num_base_bdevs": 3, 00:20:40.095 "num_base_bdevs_discovered": 3, 00:20:40.095 "num_base_bdevs_operational": 3, 00:20:40.095 "process": { 00:20:40.095 "type": "rebuild", 00:20:40.095 "target": "spare", 00:20:40.095 "progress": { 00:20:40.095 "blocks": 20480, 00:20:40.095 "percent": 15 00:20:40.095 } 00:20:40.095 }, 00:20:40.095 "base_bdevs_list": [ 00:20:40.095 { 00:20:40.095 "name": "spare", 00:20:40.095 "uuid": "effab0a6-2ed7-54d3-b0f1-3a8fb3f93301", 00:20:40.095 "is_configured": true, 00:20:40.095 "data_offset": 0, 00:20:40.095 "data_size": 65536 00:20:40.095 }, 00:20:40.095 { 00:20:40.095 "name": "BaseBdev2", 00:20:40.095 "uuid": "6d954ed0-17fc-58d4-9c47-9b2377970606", 00:20:40.095 "is_configured": true, 00:20:40.095 "data_offset": 0, 00:20:40.095 "data_size": 65536 00:20:40.095 }, 00:20:40.095 { 00:20:40.095 "name": "BaseBdev3", 00:20:40.095 "uuid": "07447396-214e-54ff-8a82-7437e59568c2", 00:20:40.095 "is_configured": true, 00:20:40.095 "data_offset": 0, 00:20:40.095 
"data_size": 65536 00:20:40.095 } 00:20:40.095 ] 00:20:40.095 }' 00:20:40.095 13:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:40.095 13:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:40.096 13:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:40.096 13:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:40.096 13:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:40.096 13:40:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.096 13:40:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.096 [2024-11-20 13:40:39.312105] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:40.096 [2024-11-20 13:40:39.363014] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:40.096 [2024-11-20 13:40:39.363099] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:40.096 [2024-11-20 13:40:39.363125] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:40.096 [2024-11-20 13:40:39.363136] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:40.096 13:40:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.096 13:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:40.096 13:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:40.096 13:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:40.096 13:40:39 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:40.096 13:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:40.096 13:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:40.096 13:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:40.096 13:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:40.096 13:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:40.096 13:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:40.096 13:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.096 13:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.096 13:40:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.096 13:40:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.096 13:40:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.096 13:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:40.096 "name": "raid_bdev1", 00:20:40.096 "uuid": "f2dd1414-34c7-42b7-a053-a2df220dcecf", 00:20:40.096 "strip_size_kb": 64, 00:20:40.096 "state": "online", 00:20:40.096 "raid_level": "raid5f", 00:20:40.096 "superblock": false, 00:20:40.096 "num_base_bdevs": 3, 00:20:40.096 "num_base_bdevs_discovered": 2, 00:20:40.096 "num_base_bdevs_operational": 2, 00:20:40.096 "base_bdevs_list": [ 00:20:40.096 { 00:20:40.096 "name": null, 00:20:40.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.096 "is_configured": false, 00:20:40.096 "data_offset": 0, 00:20:40.096 "data_size": 65536 00:20:40.096 }, 00:20:40.096 { 00:20:40.096 "name": "BaseBdev2", 00:20:40.096 
"uuid": "6d954ed0-17fc-58d4-9c47-9b2377970606", 00:20:40.096 "is_configured": true, 00:20:40.096 "data_offset": 0, 00:20:40.096 "data_size": 65536 00:20:40.096 }, 00:20:40.096 { 00:20:40.096 "name": "BaseBdev3", 00:20:40.096 "uuid": "07447396-214e-54ff-8a82-7437e59568c2", 00:20:40.096 "is_configured": true, 00:20:40.096 "data_offset": 0, 00:20:40.096 "data_size": 65536 00:20:40.096 } 00:20:40.096 ] 00:20:40.096 }' 00:20:40.096 13:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:40.096 13:40:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.664 13:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:40.664 13:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:40.664 13:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:40.664 13:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:40.664 13:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:40.664 13:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.664 13:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.664 13:40:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.664 13:40:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.664 13:40:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.664 13:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:40.664 "name": "raid_bdev1", 00:20:40.664 "uuid": "f2dd1414-34c7-42b7-a053-a2df220dcecf", 00:20:40.664 "strip_size_kb": 64, 00:20:40.664 "state": "online", 00:20:40.664 "raid_level": 
"raid5f", 00:20:40.664 "superblock": false, 00:20:40.664 "num_base_bdevs": 3, 00:20:40.664 "num_base_bdevs_discovered": 2, 00:20:40.664 "num_base_bdevs_operational": 2, 00:20:40.664 "base_bdevs_list": [ 00:20:40.664 { 00:20:40.664 "name": null, 00:20:40.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.664 "is_configured": false, 00:20:40.664 "data_offset": 0, 00:20:40.664 "data_size": 65536 00:20:40.664 }, 00:20:40.664 { 00:20:40.664 "name": "BaseBdev2", 00:20:40.664 "uuid": "6d954ed0-17fc-58d4-9c47-9b2377970606", 00:20:40.664 "is_configured": true, 00:20:40.664 "data_offset": 0, 00:20:40.664 "data_size": 65536 00:20:40.664 }, 00:20:40.664 { 00:20:40.664 "name": "BaseBdev3", 00:20:40.664 "uuid": "07447396-214e-54ff-8a82-7437e59568c2", 00:20:40.664 "is_configured": true, 00:20:40.664 "data_offset": 0, 00:20:40.664 "data_size": 65536 00:20:40.664 } 00:20:40.664 ] 00:20:40.664 }' 00:20:40.664 13:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:40.664 13:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:40.664 13:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:40.664 13:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:40.664 13:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:40.664 13:40:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.664 13:40:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.664 [2024-11-20 13:40:39.987236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:40.664 [2024-11-20 13:40:40.006434] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:20:40.664 13:40:40 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.664 13:40:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:40.664 [2024-11-20 13:40:40.015606] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:41.601 13:40:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:41.601 13:40:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:41.601 13:40:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:41.601 13:40:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:41.601 13:40:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:41.601 13:40:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.601 13:40:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.601 13:40:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.601 13:40:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.601 13:40:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.601 13:40:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:41.601 "name": "raid_bdev1", 00:20:41.601 "uuid": "f2dd1414-34c7-42b7-a053-a2df220dcecf", 00:20:41.601 "strip_size_kb": 64, 00:20:41.601 "state": "online", 00:20:41.601 "raid_level": "raid5f", 00:20:41.601 "superblock": false, 00:20:41.601 "num_base_bdevs": 3, 00:20:41.601 "num_base_bdevs_discovered": 3, 00:20:41.601 "num_base_bdevs_operational": 3, 00:20:41.601 "process": { 00:20:41.601 "type": "rebuild", 00:20:41.602 "target": "spare", 00:20:41.602 "progress": { 00:20:41.602 "blocks": 20480, 00:20:41.602 
"percent": 15 00:20:41.602 } 00:20:41.602 }, 00:20:41.602 "base_bdevs_list": [ 00:20:41.602 { 00:20:41.602 "name": "spare", 00:20:41.602 "uuid": "effab0a6-2ed7-54d3-b0f1-3a8fb3f93301", 00:20:41.602 "is_configured": true, 00:20:41.602 "data_offset": 0, 00:20:41.602 "data_size": 65536 00:20:41.602 }, 00:20:41.602 { 00:20:41.602 "name": "BaseBdev2", 00:20:41.602 "uuid": "6d954ed0-17fc-58d4-9c47-9b2377970606", 00:20:41.602 "is_configured": true, 00:20:41.602 "data_offset": 0, 00:20:41.602 "data_size": 65536 00:20:41.602 }, 00:20:41.602 { 00:20:41.602 "name": "BaseBdev3", 00:20:41.602 "uuid": "07447396-214e-54ff-8a82-7437e59568c2", 00:20:41.602 "is_configured": true, 00:20:41.602 "data_offset": 0, 00:20:41.602 "data_size": 65536 00:20:41.602 } 00:20:41.602 ] 00:20:41.602 }' 00:20:41.602 13:40:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:41.861 13:40:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:41.861 13:40:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:41.861 13:40:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:41.861 13:40:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:20:41.861 13:40:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:20:41.861 13:40:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:20:41.861 13:40:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=549 00:20:41.861 13:40:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:41.861 13:40:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:41.861 13:40:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:20:41.861 13:40:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:41.861 13:40:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:41.861 13:40:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:41.861 13:40:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.861 13:40:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.861 13:40:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.861 13:40:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.861 13:40:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.861 13:40:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:41.861 "name": "raid_bdev1", 00:20:41.861 "uuid": "f2dd1414-34c7-42b7-a053-a2df220dcecf", 00:20:41.861 "strip_size_kb": 64, 00:20:41.861 "state": "online", 00:20:41.861 "raid_level": "raid5f", 00:20:41.861 "superblock": false, 00:20:41.861 "num_base_bdevs": 3, 00:20:41.861 "num_base_bdevs_discovered": 3, 00:20:41.861 "num_base_bdevs_operational": 3, 00:20:41.861 "process": { 00:20:41.861 "type": "rebuild", 00:20:41.861 "target": "spare", 00:20:41.861 "progress": { 00:20:41.861 "blocks": 22528, 00:20:41.861 "percent": 17 00:20:41.861 } 00:20:41.861 }, 00:20:41.861 "base_bdevs_list": [ 00:20:41.861 { 00:20:41.861 "name": "spare", 00:20:41.861 "uuid": "effab0a6-2ed7-54d3-b0f1-3a8fb3f93301", 00:20:41.861 "is_configured": true, 00:20:41.861 "data_offset": 0, 00:20:41.861 "data_size": 65536 00:20:41.861 }, 00:20:41.861 { 00:20:41.861 "name": "BaseBdev2", 00:20:41.861 "uuid": "6d954ed0-17fc-58d4-9c47-9b2377970606", 00:20:41.861 "is_configured": true, 00:20:41.861 "data_offset": 0, 00:20:41.861 
"data_size": 65536 00:20:41.861 }, 00:20:41.861 { 00:20:41.861 "name": "BaseBdev3", 00:20:41.861 "uuid": "07447396-214e-54ff-8a82-7437e59568c2", 00:20:41.861 "is_configured": true, 00:20:41.861 "data_offset": 0, 00:20:41.861 "data_size": 65536 00:20:41.861 } 00:20:41.861 ] 00:20:41.861 }' 00:20:41.861 13:40:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:41.861 13:40:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:41.861 13:40:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:41.861 13:40:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:41.861 13:40:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:43.236 13:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:43.236 13:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:43.236 13:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:43.236 13:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:43.236 13:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:43.236 13:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:43.236 13:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.236 13:40:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.236 13:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.236 13:40:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.236 13:40:42 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.236 13:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:43.236 "name": "raid_bdev1", 00:20:43.236 "uuid": "f2dd1414-34c7-42b7-a053-a2df220dcecf", 00:20:43.236 "strip_size_kb": 64, 00:20:43.236 "state": "online", 00:20:43.236 "raid_level": "raid5f", 00:20:43.236 "superblock": false, 00:20:43.236 "num_base_bdevs": 3, 00:20:43.236 "num_base_bdevs_discovered": 3, 00:20:43.236 "num_base_bdevs_operational": 3, 00:20:43.236 "process": { 00:20:43.236 "type": "rebuild", 00:20:43.236 "target": "spare", 00:20:43.236 "progress": { 00:20:43.236 "blocks": 45056, 00:20:43.236 "percent": 34 00:20:43.236 } 00:20:43.236 }, 00:20:43.236 "base_bdevs_list": [ 00:20:43.236 { 00:20:43.236 "name": "spare", 00:20:43.236 "uuid": "effab0a6-2ed7-54d3-b0f1-3a8fb3f93301", 00:20:43.236 "is_configured": true, 00:20:43.236 "data_offset": 0, 00:20:43.236 "data_size": 65536 00:20:43.236 }, 00:20:43.236 { 00:20:43.236 "name": "BaseBdev2", 00:20:43.236 "uuid": "6d954ed0-17fc-58d4-9c47-9b2377970606", 00:20:43.236 "is_configured": true, 00:20:43.236 "data_offset": 0, 00:20:43.236 "data_size": 65536 00:20:43.236 }, 00:20:43.236 { 00:20:43.236 "name": "BaseBdev3", 00:20:43.236 "uuid": "07447396-214e-54ff-8a82-7437e59568c2", 00:20:43.236 "is_configured": true, 00:20:43.236 "data_offset": 0, 00:20:43.236 "data_size": 65536 00:20:43.236 } 00:20:43.236 ] 00:20:43.236 }' 00:20:43.236 13:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:43.236 13:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:43.236 13:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:43.236 13:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:43.236 13:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:20:44.172 13:40:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:44.172 13:40:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:44.172 13:40:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:44.172 13:40:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:44.172 13:40:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:44.172 13:40:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:44.172 13:40:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:44.172 13:40:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:44.172 13:40:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.172 13:40:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:44.172 13:40:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.172 13:40:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:44.172 "name": "raid_bdev1", 00:20:44.172 "uuid": "f2dd1414-34c7-42b7-a053-a2df220dcecf", 00:20:44.172 "strip_size_kb": 64, 00:20:44.172 "state": "online", 00:20:44.172 "raid_level": "raid5f", 00:20:44.172 "superblock": false, 00:20:44.172 "num_base_bdevs": 3, 00:20:44.172 "num_base_bdevs_discovered": 3, 00:20:44.172 "num_base_bdevs_operational": 3, 00:20:44.172 "process": { 00:20:44.172 "type": "rebuild", 00:20:44.172 "target": "spare", 00:20:44.172 "progress": { 00:20:44.172 "blocks": 69632, 00:20:44.172 "percent": 53 00:20:44.172 } 00:20:44.172 }, 00:20:44.172 "base_bdevs_list": [ 00:20:44.172 { 00:20:44.172 "name": "spare", 00:20:44.172 "uuid": 
"effab0a6-2ed7-54d3-b0f1-3a8fb3f93301", 00:20:44.172 "is_configured": true, 00:20:44.172 "data_offset": 0, 00:20:44.172 "data_size": 65536 00:20:44.172 }, 00:20:44.172 { 00:20:44.172 "name": "BaseBdev2", 00:20:44.172 "uuid": "6d954ed0-17fc-58d4-9c47-9b2377970606", 00:20:44.172 "is_configured": true, 00:20:44.172 "data_offset": 0, 00:20:44.172 "data_size": 65536 00:20:44.172 }, 00:20:44.172 { 00:20:44.172 "name": "BaseBdev3", 00:20:44.172 "uuid": "07447396-214e-54ff-8a82-7437e59568c2", 00:20:44.172 "is_configured": true, 00:20:44.172 "data_offset": 0, 00:20:44.172 "data_size": 65536 00:20:44.172 } 00:20:44.172 ] 00:20:44.172 }' 00:20:44.172 13:40:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:44.172 13:40:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:44.172 13:40:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:44.172 13:40:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:44.172 13:40:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:45.131 13:40:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:45.131 13:40:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:45.131 13:40:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:45.131 13:40:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:45.131 13:40:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:45.131 13:40:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:45.132 13:40:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.132 13:40:44 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.132 13:40:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.132 13:40:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.132 13:40:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.390 13:40:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:45.390 "name": "raid_bdev1", 00:20:45.390 "uuid": "f2dd1414-34c7-42b7-a053-a2df220dcecf", 00:20:45.390 "strip_size_kb": 64, 00:20:45.390 "state": "online", 00:20:45.390 "raid_level": "raid5f", 00:20:45.390 "superblock": false, 00:20:45.390 "num_base_bdevs": 3, 00:20:45.390 "num_base_bdevs_discovered": 3, 00:20:45.391 "num_base_bdevs_operational": 3, 00:20:45.391 "process": { 00:20:45.391 "type": "rebuild", 00:20:45.391 "target": "spare", 00:20:45.391 "progress": { 00:20:45.391 "blocks": 92160, 00:20:45.391 "percent": 70 00:20:45.391 } 00:20:45.391 }, 00:20:45.391 "base_bdevs_list": [ 00:20:45.391 { 00:20:45.391 "name": "spare", 00:20:45.391 "uuid": "effab0a6-2ed7-54d3-b0f1-3a8fb3f93301", 00:20:45.391 "is_configured": true, 00:20:45.391 "data_offset": 0, 00:20:45.391 "data_size": 65536 00:20:45.391 }, 00:20:45.391 { 00:20:45.391 "name": "BaseBdev2", 00:20:45.391 "uuid": "6d954ed0-17fc-58d4-9c47-9b2377970606", 00:20:45.391 "is_configured": true, 00:20:45.391 "data_offset": 0, 00:20:45.391 "data_size": 65536 00:20:45.391 }, 00:20:45.391 { 00:20:45.391 "name": "BaseBdev3", 00:20:45.391 "uuid": "07447396-214e-54ff-8a82-7437e59568c2", 00:20:45.391 "is_configured": true, 00:20:45.391 "data_offset": 0, 00:20:45.391 "data_size": 65536 00:20:45.391 } 00:20:45.391 ] 00:20:45.391 }' 00:20:45.391 13:40:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:45.391 13:40:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:45.391 13:40:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:45.391 13:40:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:45.391 13:40:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:46.327 13:40:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:46.327 13:40:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:46.327 13:40:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:46.327 13:40:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:46.327 13:40:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:46.327 13:40:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:46.327 13:40:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.327 13:40:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.327 13:40:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.327 13:40:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.327 13:40:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.327 13:40:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:46.327 "name": "raid_bdev1", 00:20:46.327 "uuid": "f2dd1414-34c7-42b7-a053-a2df220dcecf", 00:20:46.327 "strip_size_kb": 64, 00:20:46.327 "state": "online", 00:20:46.327 "raid_level": "raid5f", 00:20:46.327 "superblock": false, 00:20:46.327 "num_base_bdevs": 3, 00:20:46.327 "num_base_bdevs_discovered": 3, 00:20:46.327 
"num_base_bdevs_operational": 3, 00:20:46.327 "process": { 00:20:46.327 "type": "rebuild", 00:20:46.327 "target": "spare", 00:20:46.327 "progress": { 00:20:46.327 "blocks": 114688, 00:20:46.327 "percent": 87 00:20:46.327 } 00:20:46.327 }, 00:20:46.327 "base_bdevs_list": [ 00:20:46.327 { 00:20:46.327 "name": "spare", 00:20:46.327 "uuid": "effab0a6-2ed7-54d3-b0f1-3a8fb3f93301", 00:20:46.327 "is_configured": true, 00:20:46.327 "data_offset": 0, 00:20:46.327 "data_size": 65536 00:20:46.327 }, 00:20:46.327 { 00:20:46.327 "name": "BaseBdev2", 00:20:46.327 "uuid": "6d954ed0-17fc-58d4-9c47-9b2377970606", 00:20:46.327 "is_configured": true, 00:20:46.327 "data_offset": 0, 00:20:46.327 "data_size": 65536 00:20:46.327 }, 00:20:46.327 { 00:20:46.327 "name": "BaseBdev3", 00:20:46.327 "uuid": "07447396-214e-54ff-8a82-7437e59568c2", 00:20:46.327 "is_configured": true, 00:20:46.327 "data_offset": 0, 00:20:46.327 "data_size": 65536 00:20:46.327 } 00:20:46.327 ] 00:20:46.327 }' 00:20:46.327 13:40:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:46.327 13:40:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:46.327 13:40:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:46.586 13:40:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:46.586 13:40:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:47.152 [2024-11-20 13:40:46.466987] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:47.152 [2024-11-20 13:40:46.467118] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:47.152 [2024-11-20 13:40:46.467171] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:47.411 13:40:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:20:47.411 13:40:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:47.411 13:40:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:47.411 13:40:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:47.411 13:40:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:47.411 13:40:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:47.411 13:40:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.411 13:40:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.411 13:40:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.411 13:40:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.411 13:40:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.669 13:40:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:47.669 "name": "raid_bdev1", 00:20:47.669 "uuid": "f2dd1414-34c7-42b7-a053-a2df220dcecf", 00:20:47.669 "strip_size_kb": 64, 00:20:47.669 "state": "online", 00:20:47.669 "raid_level": "raid5f", 00:20:47.669 "superblock": false, 00:20:47.669 "num_base_bdevs": 3, 00:20:47.669 "num_base_bdevs_discovered": 3, 00:20:47.669 "num_base_bdevs_operational": 3, 00:20:47.669 "base_bdevs_list": [ 00:20:47.669 { 00:20:47.669 "name": "spare", 00:20:47.669 "uuid": "effab0a6-2ed7-54d3-b0f1-3a8fb3f93301", 00:20:47.669 "is_configured": true, 00:20:47.669 "data_offset": 0, 00:20:47.669 "data_size": 65536 00:20:47.669 }, 00:20:47.669 { 00:20:47.669 "name": "BaseBdev2", 00:20:47.669 "uuid": "6d954ed0-17fc-58d4-9c47-9b2377970606", 00:20:47.669 "is_configured": true, 00:20:47.669 
"data_offset": 0, 00:20:47.669 "data_size": 65536 00:20:47.669 }, 00:20:47.669 { 00:20:47.669 "name": "BaseBdev3", 00:20:47.669 "uuid": "07447396-214e-54ff-8a82-7437e59568c2", 00:20:47.669 "is_configured": true, 00:20:47.669 "data_offset": 0, 00:20:47.669 "data_size": 65536 00:20:47.669 } 00:20:47.669 ] 00:20:47.669 }' 00:20:47.669 13:40:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:47.669 13:40:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:47.669 13:40:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:47.669 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:47.669 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:20:47.669 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:47.669 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:47.669 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:47.669 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:47.669 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:47.670 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.670 13:40:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.670 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.670 13:40:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.670 13:40:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.670 13:40:47 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:47.670 "name": "raid_bdev1", 00:20:47.670 "uuid": "f2dd1414-34c7-42b7-a053-a2df220dcecf", 00:20:47.670 "strip_size_kb": 64, 00:20:47.670 "state": "online", 00:20:47.670 "raid_level": "raid5f", 00:20:47.670 "superblock": false, 00:20:47.670 "num_base_bdevs": 3, 00:20:47.670 "num_base_bdevs_discovered": 3, 00:20:47.670 "num_base_bdevs_operational": 3, 00:20:47.670 "base_bdevs_list": [ 00:20:47.670 { 00:20:47.670 "name": "spare", 00:20:47.670 "uuid": "effab0a6-2ed7-54d3-b0f1-3a8fb3f93301", 00:20:47.670 "is_configured": true, 00:20:47.670 "data_offset": 0, 00:20:47.670 "data_size": 65536 00:20:47.670 }, 00:20:47.670 { 00:20:47.670 "name": "BaseBdev2", 00:20:47.670 "uuid": "6d954ed0-17fc-58d4-9c47-9b2377970606", 00:20:47.670 "is_configured": true, 00:20:47.670 "data_offset": 0, 00:20:47.670 "data_size": 65536 00:20:47.670 }, 00:20:47.670 { 00:20:47.670 "name": "BaseBdev3", 00:20:47.670 "uuid": "07447396-214e-54ff-8a82-7437e59568c2", 00:20:47.670 "is_configured": true, 00:20:47.670 "data_offset": 0, 00:20:47.670 "data_size": 65536 00:20:47.670 } 00:20:47.670 ] 00:20:47.670 }' 00:20:47.670 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:47.670 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:47.670 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:47.928 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:47.928 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:47.928 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:47.928 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:47.928 13:40:47 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:47.928 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:47.928 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:47.928 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:47.928 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:47.928 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:47.928 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:47.928 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.928 13:40:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.928 13:40:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.928 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.928 13:40:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.928 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:47.928 "name": "raid_bdev1", 00:20:47.928 "uuid": "f2dd1414-34c7-42b7-a053-a2df220dcecf", 00:20:47.928 "strip_size_kb": 64, 00:20:47.928 "state": "online", 00:20:47.928 "raid_level": "raid5f", 00:20:47.928 "superblock": false, 00:20:47.928 "num_base_bdevs": 3, 00:20:47.928 "num_base_bdevs_discovered": 3, 00:20:47.928 "num_base_bdevs_operational": 3, 00:20:47.928 "base_bdevs_list": [ 00:20:47.928 { 00:20:47.928 "name": "spare", 00:20:47.928 "uuid": "effab0a6-2ed7-54d3-b0f1-3a8fb3f93301", 00:20:47.929 "is_configured": true, 00:20:47.929 "data_offset": 0, 00:20:47.929 "data_size": 65536 00:20:47.929 }, 00:20:47.929 { 00:20:47.929 
"name": "BaseBdev2", 00:20:47.929 "uuid": "6d954ed0-17fc-58d4-9c47-9b2377970606", 00:20:47.929 "is_configured": true, 00:20:47.929 "data_offset": 0, 00:20:47.929 "data_size": 65536 00:20:47.929 }, 00:20:47.929 { 00:20:47.929 "name": "BaseBdev3", 00:20:47.929 "uuid": "07447396-214e-54ff-8a82-7437e59568c2", 00:20:47.929 "is_configured": true, 00:20:47.929 "data_offset": 0, 00:20:47.929 "data_size": 65536 00:20:47.929 } 00:20:47.929 ] 00:20:47.929 }' 00:20:47.929 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:47.929 13:40:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.188 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:48.188 13:40:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.188 13:40:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.188 [2024-11-20 13:40:47.646947] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:48.188 [2024-11-20 13:40:47.646986] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:48.188 [2024-11-20 13:40:47.647092] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:48.188 [2024-11-20 13:40:47.647191] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:48.188 [2024-11-20 13:40:47.647219] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:48.188 13:40:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.188 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.188 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:20:48.188 13:40:47 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.188 13:40:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.188 13:40:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.448 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:48.448 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:48.448 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:48.448 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:48.448 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:48.448 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:48.448 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:48.448 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:48.448 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:48.448 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:20:48.448 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:48.448 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:48.448 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:48.448 /dev/nbd0 00:20:48.448 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:48.708 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:48.708 13:40:47 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:48.708 13:40:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:20:48.708 13:40:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:48.708 13:40:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:48.708 13:40:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:48.708 13:40:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:20:48.708 13:40:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:48.708 13:40:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:48.708 13:40:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:48.708 1+0 records in 00:20:48.708 1+0 records out 00:20:48.708 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316877 s, 12.9 MB/s 00:20:48.708 13:40:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:48.708 13:40:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:20:48.708 13:40:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:48.708 13:40:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:48.708 13:40:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:20:48.708 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:48.708 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:48.708 13:40:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:48.708 /dev/nbd1 00:20:48.967 13:40:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:48.967 13:40:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:48.967 13:40:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:48.967 13:40:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:20:48.967 13:40:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:48.968 13:40:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:48.968 13:40:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:48.968 13:40:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:20:48.968 13:40:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:48.968 13:40:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:48.968 13:40:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:48.968 1+0 records in 00:20:48.968 1+0 records out 00:20:48.968 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452527 s, 9.1 MB/s 00:20:48.968 13:40:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:48.968 13:40:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:20:48.968 13:40:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:48.968 13:40:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:48.968 13:40:48 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:20:48.968 13:40:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:48.968 13:40:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:48.968 13:40:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:49.227 13:40:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:49.227 13:40:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:49.227 13:40:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:49.227 13:40:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:49.227 13:40:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:20:49.227 13:40:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:49.227 13:40:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:49.227 13:40:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:49.227 13:40:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:49.227 13:40:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:49.227 13:40:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:49.227 13:40:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:49.227 13:40:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:49.227 13:40:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:49.227 13:40:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:20:49.227 13:40:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:49.227 13:40:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:49.486 13:40:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:49.486 13:40:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:49.486 13:40:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:49.486 13:40:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:49.486 13:40:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:49.486 13:40:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:49.486 13:40:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:49.486 13:40:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:49.486 13:40:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:20:49.486 13:40:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81374 00:20:49.486 13:40:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81374 ']' 00:20:49.486 13:40:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81374 00:20:49.486 13:40:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:20:49.486 13:40:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:49.486 13:40:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81374 00:20:49.745 killing process with pid 81374 00:20:49.745 Received shutdown signal, test time was about 60.000000 seconds 00:20:49.745 00:20:49.745 Latency(us) 00:20:49.745 
[2024-11-20T13:40:49.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:49.745 [2024-11-20T13:40:49.230Z] =================================================================================================================== 00:20:49.745 [2024-11-20T13:40:49.230Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:49.745 13:40:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:49.745 13:40:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:49.746 13:40:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81374' 00:20:49.746 13:40:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81374 00:20:49.746 13:40:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81374 00:20:49.746 [2024-11-20 13:40:48.981567] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:50.029 [2024-11-20 13:40:49.381965] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:51.406 13:40:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:20:51.406 00:20:51.406 real 0m15.844s 00:20:51.406 user 0m19.361s 00:20:51.406 sys 0m2.559s 00:20:51.406 13:40:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:51.406 13:40:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.406 ************************************ 00:20:51.406 END TEST raid5f_rebuild_test 00:20:51.406 ************************************ 00:20:51.406 13:40:50 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:20:51.406 13:40:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:51.406 13:40:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:51.406 13:40:50 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:20:51.406 ************************************ 00:20:51.406 START TEST raid5f_rebuild_test_sb 00:20:51.406 ************************************ 00:20:51.406 13:40:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:20:51.406 13:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:20:51.406 13:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:20:51.406 13:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:51.406 13:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:51.406 13:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:51.406 13:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:51.406 13:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:51.406 13:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:51.406 13:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:51.406 13:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:51.406 13:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:51.406 13:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:51.406 13:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:51.406 13:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:20:51.406 13:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:51.406 13:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:20:51.406 13:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:51.406 13:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:51.406 13:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:51.406 13:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:51.406 13:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:51.406 13:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:51.406 13:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:51.406 13:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:20:51.406 13:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:20:51.406 13:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:20:51.406 13:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:20:51.406 13:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:51.406 13:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:51.406 13:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=81814 00:20:51.406 13:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 81814 00:20:51.406 13:40:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:51.406 13:40:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 81814 ']' 00:20:51.406 13:40:50 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.406 13:40:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:51.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:51.406 13:40:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.406 13:40:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:51.406 13:40:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.406 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:51.406 Zero copy mechanism will not be used. 00:20:51.407 [2024-11-20 13:40:50.702086] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:20:51.407 [2024-11-20 13:40:50.702241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81814 ] 00:20:51.407 [2024-11-20 13:40:50.881668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.664 [2024-11-20 13:40:50.996413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.921 [2024-11-20 13:40:51.210112] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:51.921 [2024-11-20 13:40:51.210152] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:52.200 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:52.200 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:20:52.200 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in 
"${base_bdevs[@]}" 00:20:52.200 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:52.200 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.200 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:52.200 BaseBdev1_malloc 00:20:52.200 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.200 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:52.200 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.200 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:52.200 [2024-11-20 13:40:51.592152] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:52.200 [2024-11-20 13:40:51.592223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:52.200 [2024-11-20 13:40:51.592248] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:52.200 [2024-11-20 13:40:51.592264] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:52.200 [2024-11-20 13:40:51.594772] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:52.200 [2024-11-20 13:40:51.594836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:52.200 BaseBdev1 00:20:52.200 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.200 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:52.200 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:52.200 13:40:51 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.200 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:52.200 BaseBdev2_malloc 00:20:52.200 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.200 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:52.200 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.200 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:52.200 [2024-11-20 13:40:51.650890] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:52.200 [2024-11-20 13:40:51.650958] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:52.200 [2024-11-20 13:40:51.650987] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:52.200 [2024-11-20 13:40:51.651015] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:52.200 [2024-11-20 13:40:51.653531] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:52.200 [2024-11-20 13:40:51.653572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:52.200 BaseBdev2 00:20:52.200 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.200 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:52.200 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:52.200 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.200 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:20:52.457 BaseBdev3_malloc 00:20:52.457 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.457 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:52.457 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.457 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:52.457 [2024-11-20 13:40:51.717681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:52.457 [2024-11-20 13:40:51.717746] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:52.457 [2024-11-20 13:40:51.717772] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:52.457 [2024-11-20 13:40:51.717788] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:52.457 [2024-11-20 13:40:51.720428] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:52.457 [2024-11-20 13:40:51.720476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:52.457 BaseBdev3 00:20:52.457 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.457 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:52.457 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.457 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:52.457 spare_malloc 00:20:52.457 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.457 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:20:52.457 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.457 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:52.457 spare_delay 00:20:52.457 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.457 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:52.457 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.457 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:52.457 [2024-11-20 13:40:51.787772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:52.457 [2024-11-20 13:40:51.787844] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:52.457 [2024-11-20 13:40:51.787869] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:20:52.457 [2024-11-20 13:40:51.787885] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:52.457 [2024-11-20 13:40:51.790509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:52.457 [2024-11-20 13:40:51.790563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:52.457 spare 00:20:52.457 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.457 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:20:52.457 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.457 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:52.457 [2024-11-20 13:40:51.799842] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:52.457 [2024-11-20 13:40:51.802062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:52.457 [2024-11-20 13:40:51.802156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:52.457 [2024-11-20 13:40:51.802368] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:52.457 [2024-11-20 13:40:51.802387] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:52.457 [2024-11-20 13:40:51.802688] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:52.457 [2024-11-20 13:40:51.808832] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:52.457 [2024-11-20 13:40:51.808866] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:52.457 [2024-11-20 13:40:51.809136] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:52.457 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.457 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:52.457 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:52.457 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:52.457 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:52.457 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:52.457 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:52.457 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 
-- # local raid_bdev_info 00:20:52.457 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:52.457 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:52.457 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:52.457 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:52.457 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.457 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.457 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:52.457 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.457 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:52.457 "name": "raid_bdev1", 00:20:52.457 "uuid": "e5a158d3-3130-4614-8a56-b805e70ff902", 00:20:52.457 "strip_size_kb": 64, 00:20:52.457 "state": "online", 00:20:52.457 "raid_level": "raid5f", 00:20:52.457 "superblock": true, 00:20:52.457 "num_base_bdevs": 3, 00:20:52.457 "num_base_bdevs_discovered": 3, 00:20:52.457 "num_base_bdevs_operational": 3, 00:20:52.457 "base_bdevs_list": [ 00:20:52.457 { 00:20:52.457 "name": "BaseBdev1", 00:20:52.457 "uuid": "655d615e-6891-57e9-8484-30a5820ced9a", 00:20:52.457 "is_configured": true, 00:20:52.457 "data_offset": 2048, 00:20:52.457 "data_size": 63488 00:20:52.457 }, 00:20:52.457 { 00:20:52.457 "name": "BaseBdev2", 00:20:52.457 "uuid": "67a3a48f-b505-54d7-98c6-b50d8c078b14", 00:20:52.457 "is_configured": true, 00:20:52.457 "data_offset": 2048, 00:20:52.457 "data_size": 63488 00:20:52.457 }, 00:20:52.457 { 00:20:52.457 "name": "BaseBdev3", 00:20:52.457 "uuid": "e9016150-a0e0-5579-80ff-680b4057dc06", 00:20:52.457 "is_configured": true, 
00:20:52.457 "data_offset": 2048, 00:20:52.457 "data_size": 63488 00:20:52.457 } 00:20:52.457 ] 00:20:52.457 }' 00:20:52.457 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:52.457 13:40:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:53.023 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:53.023 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:53.023 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.023 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:53.023 [2024-11-20 13:40:52.255580] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:53.023 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.023 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:20:53.023 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.023 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.023 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:53.023 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:53.024 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.024 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:20:53.024 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:53.024 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:53.024 13:40:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:53.024 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:53.024 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:53.024 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:53.024 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:53.024 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:53.024 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:53.024 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:20:53.024 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:53.024 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:53.024 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:53.281 [2024-11-20 13:40:52.559011] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:53.282 /dev/nbd0 00:20:53.282 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:53.282 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:53.282 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:53.282 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:20:53.282 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:53.282 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 
-- # (( i <= 20 )) 00:20:53.282 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:53.282 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:20:53.282 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:53.282 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:53.282 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:53.282 1+0 records in 00:20:53.282 1+0 records out 00:20:53.282 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282262 s, 14.5 MB/s 00:20:53.282 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:53.282 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:20:53.282 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:53.282 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:53.282 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:20:53.282 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:53.282 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:53.282 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:20:53.282 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:20:53.282 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:20:53.282 13:40:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom 
of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:20:53.849 496+0 records in 00:20:53.849 496+0 records out 00:20:53.849 65011712 bytes (65 MB, 62 MiB) copied, 0.412276 s, 158 MB/s 00:20:53.849 13:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:53.849 13:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:53.849 13:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:53.849 13:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:53.849 13:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:20:53.849 13:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:53.849 13:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:53.849 [2024-11-20 13:40:53.329998] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:54.108 13:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:54.108 13:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:54.108 13:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:54.108 13:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:54.108 13:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:54.108 13:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:54.108 13:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:54.108 13:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:54.108 13:40:53 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:54.108 13:40:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.108 13:40:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:54.108 [2024-11-20 13:40:53.373616] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:54.108 13:40:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.108 13:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:54.108 13:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:54.108 13:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:54.108 13:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:54.108 13:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:54.108 13:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:54.108 13:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:54.108 13:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:54.108 13:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:54.108 13:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:54.108 13:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:54.108 13:40:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.108 13:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.108 13:40:53 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:54.108 13:40:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.108 13:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:54.108 "name": "raid_bdev1", 00:20:54.108 "uuid": "e5a158d3-3130-4614-8a56-b805e70ff902", 00:20:54.108 "strip_size_kb": 64, 00:20:54.108 "state": "online", 00:20:54.108 "raid_level": "raid5f", 00:20:54.108 "superblock": true, 00:20:54.108 "num_base_bdevs": 3, 00:20:54.108 "num_base_bdevs_discovered": 2, 00:20:54.108 "num_base_bdevs_operational": 2, 00:20:54.108 "base_bdevs_list": [ 00:20:54.108 { 00:20:54.108 "name": null, 00:20:54.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.108 "is_configured": false, 00:20:54.108 "data_offset": 0, 00:20:54.108 "data_size": 63488 00:20:54.108 }, 00:20:54.108 { 00:20:54.108 "name": "BaseBdev2", 00:20:54.108 "uuid": "67a3a48f-b505-54d7-98c6-b50d8c078b14", 00:20:54.108 "is_configured": true, 00:20:54.108 "data_offset": 2048, 00:20:54.108 "data_size": 63488 00:20:54.108 }, 00:20:54.108 { 00:20:54.108 "name": "BaseBdev3", 00:20:54.108 "uuid": "e9016150-a0e0-5579-80ff-680b4057dc06", 00:20:54.108 "is_configured": true, 00:20:54.108 "data_offset": 2048, 00:20:54.108 "data_size": 63488 00:20:54.108 } 00:20:54.108 ] 00:20:54.108 }' 00:20:54.108 13:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:54.109 13:40:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:54.368 13:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:54.368 13:40:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.368 13:40:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:54.368 [2024-11-20 13:40:53.829078] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:54.368 [2024-11-20 13:40:53.848617] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:20:54.368 13:40:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.368 13:40:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:54.627 [2024-11-20 13:40:53.857681] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:55.568 13:40:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:55.568 13:40:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:55.568 13:40:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:55.568 13:40:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:55.568 13:40:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:55.568 13:40:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.568 13:40:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.568 13:40:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.568 13:40:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:55.568 13:40:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.568 13:40:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:55.568 "name": "raid_bdev1", 00:20:55.568 "uuid": "e5a158d3-3130-4614-8a56-b805e70ff902", 00:20:55.568 "strip_size_kb": 64, 00:20:55.568 "state": "online", 00:20:55.568 "raid_level": "raid5f", 00:20:55.568 
"superblock": true, 00:20:55.568 "num_base_bdevs": 3, 00:20:55.568 "num_base_bdevs_discovered": 3, 00:20:55.568 "num_base_bdevs_operational": 3, 00:20:55.568 "process": { 00:20:55.568 "type": "rebuild", 00:20:55.568 "target": "spare", 00:20:55.568 "progress": { 00:20:55.568 "blocks": 18432, 00:20:55.568 "percent": 14 00:20:55.568 } 00:20:55.568 }, 00:20:55.568 "base_bdevs_list": [ 00:20:55.568 { 00:20:55.568 "name": "spare", 00:20:55.568 "uuid": "95ceeebe-1196-59df-9263-61633402193a", 00:20:55.568 "is_configured": true, 00:20:55.568 "data_offset": 2048, 00:20:55.568 "data_size": 63488 00:20:55.568 }, 00:20:55.568 { 00:20:55.568 "name": "BaseBdev2", 00:20:55.568 "uuid": "67a3a48f-b505-54d7-98c6-b50d8c078b14", 00:20:55.568 "is_configured": true, 00:20:55.568 "data_offset": 2048, 00:20:55.568 "data_size": 63488 00:20:55.568 }, 00:20:55.568 { 00:20:55.568 "name": "BaseBdev3", 00:20:55.568 "uuid": "e9016150-a0e0-5579-80ff-680b4057dc06", 00:20:55.568 "is_configured": true, 00:20:55.568 "data_offset": 2048, 00:20:55.568 "data_size": 63488 00:20:55.568 } 00:20:55.568 ] 00:20:55.568 }' 00:20:55.568 13:40:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:55.568 13:40:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:55.568 13:40:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:55.568 13:40:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:55.568 13:40:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:55.568 13:40:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.568 13:40:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:55.568 [2024-11-20 13:40:54.993596] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:20:55.841 [2024-11-20 13:40:55.067723] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:55.841 [2024-11-20 13:40:55.067800] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:55.841 [2024-11-20 13:40:55.067825] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:55.841 [2024-11-20 13:40:55.067836] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:55.841 13:40:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.841 13:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:55.841 13:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:55.841 13:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:55.841 13:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:55.841 13:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:55.841 13:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:55.841 13:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:55.841 13:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:55.841 13:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:55.841 13:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:55.841 13:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.841 13:40:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.841 
13:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.841 13:40:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:55.841 13:40:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.841 13:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:55.841 "name": "raid_bdev1", 00:20:55.841 "uuid": "e5a158d3-3130-4614-8a56-b805e70ff902", 00:20:55.841 "strip_size_kb": 64, 00:20:55.841 "state": "online", 00:20:55.841 "raid_level": "raid5f", 00:20:55.841 "superblock": true, 00:20:55.841 "num_base_bdevs": 3, 00:20:55.841 "num_base_bdevs_discovered": 2, 00:20:55.841 "num_base_bdevs_operational": 2, 00:20:55.841 "base_bdevs_list": [ 00:20:55.841 { 00:20:55.841 "name": null, 00:20:55.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.841 "is_configured": false, 00:20:55.841 "data_offset": 0, 00:20:55.841 "data_size": 63488 00:20:55.841 }, 00:20:55.841 { 00:20:55.841 "name": "BaseBdev2", 00:20:55.841 "uuid": "67a3a48f-b505-54d7-98c6-b50d8c078b14", 00:20:55.841 "is_configured": true, 00:20:55.841 "data_offset": 2048, 00:20:55.841 "data_size": 63488 00:20:55.841 }, 00:20:55.841 { 00:20:55.841 "name": "BaseBdev3", 00:20:55.841 "uuid": "e9016150-a0e0-5579-80ff-680b4057dc06", 00:20:55.841 "is_configured": true, 00:20:55.841 "data_offset": 2048, 00:20:55.841 "data_size": 63488 00:20:55.841 } 00:20:55.841 ] 00:20:55.841 }' 00:20:55.841 13:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:55.841 13:40:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:56.100 13:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:56.100 13:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:56.100 13:40:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:56.100 13:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:56.100 13:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:56.100 13:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:56.100 13:40:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.100 13:40:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:56.100 13:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:56.100 13:40:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.100 13:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:56.100 "name": "raid_bdev1", 00:20:56.100 "uuid": "e5a158d3-3130-4614-8a56-b805e70ff902", 00:20:56.100 "strip_size_kb": 64, 00:20:56.100 "state": "online", 00:20:56.100 "raid_level": "raid5f", 00:20:56.100 "superblock": true, 00:20:56.100 "num_base_bdevs": 3, 00:20:56.100 "num_base_bdevs_discovered": 2, 00:20:56.100 "num_base_bdevs_operational": 2, 00:20:56.100 "base_bdevs_list": [ 00:20:56.100 { 00:20:56.100 "name": null, 00:20:56.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:56.100 "is_configured": false, 00:20:56.100 "data_offset": 0, 00:20:56.100 "data_size": 63488 00:20:56.100 }, 00:20:56.100 { 00:20:56.100 "name": "BaseBdev2", 00:20:56.100 "uuid": "67a3a48f-b505-54d7-98c6-b50d8c078b14", 00:20:56.100 "is_configured": true, 00:20:56.100 "data_offset": 2048, 00:20:56.100 "data_size": 63488 00:20:56.100 }, 00:20:56.100 { 00:20:56.100 "name": "BaseBdev3", 00:20:56.100 "uuid": "e9016150-a0e0-5579-80ff-680b4057dc06", 00:20:56.100 "is_configured": true, 00:20:56.100 "data_offset": 2048, 00:20:56.100 
"data_size": 63488 00:20:56.100 } 00:20:56.100 ] 00:20:56.100 }' 00:20:56.100 13:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:56.359 13:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:56.359 13:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:56.359 13:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:56.359 13:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:56.359 13:40:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.359 13:40:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:56.359 [2024-11-20 13:40:55.677159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:56.359 [2024-11-20 13:40:55.695888] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:20:56.359 13:40:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.359 13:40:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:56.359 [2024-11-20 13:40:55.704705] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:57.294 13:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:57.294 13:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:57.294 13:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:57.294 13:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:57.294 13:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:20:57.294 13:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.294 13:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.294 13:40:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.294 13:40:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:57.294 13:40:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.294 13:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:57.294 "name": "raid_bdev1", 00:20:57.294 "uuid": "e5a158d3-3130-4614-8a56-b805e70ff902", 00:20:57.294 "strip_size_kb": 64, 00:20:57.294 "state": "online", 00:20:57.294 "raid_level": "raid5f", 00:20:57.294 "superblock": true, 00:20:57.294 "num_base_bdevs": 3, 00:20:57.294 "num_base_bdevs_discovered": 3, 00:20:57.294 "num_base_bdevs_operational": 3, 00:20:57.294 "process": { 00:20:57.294 "type": "rebuild", 00:20:57.294 "target": "spare", 00:20:57.294 "progress": { 00:20:57.294 "blocks": 20480, 00:20:57.294 "percent": 16 00:20:57.294 } 00:20:57.294 }, 00:20:57.294 "base_bdevs_list": [ 00:20:57.294 { 00:20:57.294 "name": "spare", 00:20:57.294 "uuid": "95ceeebe-1196-59df-9263-61633402193a", 00:20:57.294 "is_configured": true, 00:20:57.294 "data_offset": 2048, 00:20:57.294 "data_size": 63488 00:20:57.294 }, 00:20:57.294 { 00:20:57.294 "name": "BaseBdev2", 00:20:57.294 "uuid": "67a3a48f-b505-54d7-98c6-b50d8c078b14", 00:20:57.294 "is_configured": true, 00:20:57.294 "data_offset": 2048, 00:20:57.294 "data_size": 63488 00:20:57.294 }, 00:20:57.294 { 00:20:57.294 "name": "BaseBdev3", 00:20:57.294 "uuid": "e9016150-a0e0-5579-80ff-680b4057dc06", 00:20:57.294 "is_configured": true, 00:20:57.294 "data_offset": 2048, 00:20:57.294 "data_size": 63488 00:20:57.294 } 00:20:57.294 ] 00:20:57.294 }' 
00:20:57.294 13:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:57.552 13:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:57.552 13:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:57.552 13:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:57.552 13:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:57.552 13:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:57.552 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:57.552 13:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:20:57.552 13:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:20:57.552 13:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=564 00:20:57.552 13:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:57.552 13:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:57.552 13:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:57.552 13:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:57.552 13:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:57.552 13:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:57.552 13:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.552 13:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.552 13:40:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.552 13:40:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:57.552 13:40:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.552 13:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:57.552 "name": "raid_bdev1", 00:20:57.552 "uuid": "e5a158d3-3130-4614-8a56-b805e70ff902", 00:20:57.552 "strip_size_kb": 64, 00:20:57.552 "state": "online", 00:20:57.552 "raid_level": "raid5f", 00:20:57.552 "superblock": true, 00:20:57.552 "num_base_bdevs": 3, 00:20:57.552 "num_base_bdevs_discovered": 3, 00:20:57.552 "num_base_bdevs_operational": 3, 00:20:57.552 "process": { 00:20:57.552 "type": "rebuild", 00:20:57.552 "target": "spare", 00:20:57.552 "progress": { 00:20:57.552 "blocks": 22528, 00:20:57.552 "percent": 17 00:20:57.552 } 00:20:57.552 }, 00:20:57.552 "base_bdevs_list": [ 00:20:57.552 { 00:20:57.552 "name": "spare", 00:20:57.552 "uuid": "95ceeebe-1196-59df-9263-61633402193a", 00:20:57.552 "is_configured": true, 00:20:57.552 "data_offset": 2048, 00:20:57.552 "data_size": 63488 00:20:57.552 }, 00:20:57.552 { 00:20:57.552 "name": "BaseBdev2", 00:20:57.552 "uuid": "67a3a48f-b505-54d7-98c6-b50d8c078b14", 00:20:57.552 "is_configured": true, 00:20:57.552 "data_offset": 2048, 00:20:57.552 "data_size": 63488 00:20:57.552 }, 00:20:57.552 { 00:20:57.552 "name": "BaseBdev3", 00:20:57.552 "uuid": "e9016150-a0e0-5579-80ff-680b4057dc06", 00:20:57.552 "is_configured": true, 00:20:57.552 "data_offset": 2048, 00:20:57.552 "data_size": 63488 00:20:57.553 } 00:20:57.553 ] 00:20:57.553 }' 00:20:57.553 13:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:57.553 13:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:20:57.553 13:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:57.553 13:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:57.553 13:40:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:58.928 13:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:58.928 13:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:58.928 13:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:58.928 13:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:58.928 13:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:58.928 13:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:58.928 13:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.928 13:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:58.928 13:40:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.928 13:40:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:58.928 13:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.929 13:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:58.929 "name": "raid_bdev1", 00:20:58.929 "uuid": "e5a158d3-3130-4614-8a56-b805e70ff902", 00:20:58.929 "strip_size_kb": 64, 00:20:58.929 "state": "online", 00:20:58.929 "raid_level": "raid5f", 00:20:58.929 "superblock": true, 00:20:58.929 "num_base_bdevs": 3, 00:20:58.929 "num_base_bdevs_discovered": 3, 00:20:58.929 
"num_base_bdevs_operational": 3, 00:20:58.929 "process": { 00:20:58.929 "type": "rebuild", 00:20:58.929 "target": "spare", 00:20:58.929 "progress": { 00:20:58.929 "blocks": 45056, 00:20:58.929 "percent": 35 00:20:58.929 } 00:20:58.929 }, 00:20:58.929 "base_bdevs_list": [ 00:20:58.929 { 00:20:58.929 "name": "spare", 00:20:58.929 "uuid": "95ceeebe-1196-59df-9263-61633402193a", 00:20:58.929 "is_configured": true, 00:20:58.929 "data_offset": 2048, 00:20:58.929 "data_size": 63488 00:20:58.929 }, 00:20:58.929 { 00:20:58.929 "name": "BaseBdev2", 00:20:58.929 "uuid": "67a3a48f-b505-54d7-98c6-b50d8c078b14", 00:20:58.929 "is_configured": true, 00:20:58.929 "data_offset": 2048, 00:20:58.929 "data_size": 63488 00:20:58.929 }, 00:20:58.929 { 00:20:58.929 "name": "BaseBdev3", 00:20:58.929 "uuid": "e9016150-a0e0-5579-80ff-680b4057dc06", 00:20:58.929 "is_configured": true, 00:20:58.929 "data_offset": 2048, 00:20:58.929 "data_size": 63488 00:20:58.929 } 00:20:58.929 ] 00:20:58.929 }' 00:20:58.929 13:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:58.929 13:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:58.929 13:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:58.929 13:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:58.929 13:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:59.865 13:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:59.865 13:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:59.865 13:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:59.865 13:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:20:59.865 13:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:59.865 13:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:59.865 13:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.865 13:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.865 13:40:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.865 13:40:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:59.865 13:40:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.865 13:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:59.865 "name": "raid_bdev1", 00:20:59.865 "uuid": "e5a158d3-3130-4614-8a56-b805e70ff902", 00:20:59.865 "strip_size_kb": 64, 00:20:59.865 "state": "online", 00:20:59.865 "raid_level": "raid5f", 00:20:59.865 "superblock": true, 00:20:59.865 "num_base_bdevs": 3, 00:20:59.865 "num_base_bdevs_discovered": 3, 00:20:59.865 "num_base_bdevs_operational": 3, 00:20:59.865 "process": { 00:20:59.865 "type": "rebuild", 00:20:59.865 "target": "spare", 00:20:59.865 "progress": { 00:20:59.865 "blocks": 67584, 00:20:59.865 "percent": 53 00:20:59.865 } 00:20:59.865 }, 00:20:59.865 "base_bdevs_list": [ 00:20:59.865 { 00:20:59.865 "name": "spare", 00:20:59.865 "uuid": "95ceeebe-1196-59df-9263-61633402193a", 00:20:59.865 "is_configured": true, 00:20:59.865 "data_offset": 2048, 00:20:59.865 "data_size": 63488 00:20:59.865 }, 00:20:59.865 { 00:20:59.865 "name": "BaseBdev2", 00:20:59.865 "uuid": "67a3a48f-b505-54d7-98c6-b50d8c078b14", 00:20:59.865 "is_configured": true, 00:20:59.865 "data_offset": 2048, 00:20:59.865 "data_size": 63488 00:20:59.865 }, 00:20:59.865 { 00:20:59.865 "name": "BaseBdev3", 
00:20:59.865 "uuid": "e9016150-a0e0-5579-80ff-680b4057dc06", 00:20:59.865 "is_configured": true, 00:20:59.865 "data_offset": 2048, 00:20:59.866 "data_size": 63488 00:20:59.866 } 00:20:59.866 ] 00:20:59.866 }' 00:20:59.866 13:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:59.866 13:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:59.866 13:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:59.866 13:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:59.866 13:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:00.801 13:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:00.801 13:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:00.801 13:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:00.801 13:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:00.801 13:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:00.801 13:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:00.801 13:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.801 13:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.801 13:41:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.801 13:41:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:01.059 13:41:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:21:01.059 13:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:01.059 "name": "raid_bdev1", 00:21:01.059 "uuid": "e5a158d3-3130-4614-8a56-b805e70ff902", 00:21:01.059 "strip_size_kb": 64, 00:21:01.059 "state": "online", 00:21:01.059 "raid_level": "raid5f", 00:21:01.059 "superblock": true, 00:21:01.059 "num_base_bdevs": 3, 00:21:01.059 "num_base_bdevs_discovered": 3, 00:21:01.059 "num_base_bdevs_operational": 3, 00:21:01.059 "process": { 00:21:01.059 "type": "rebuild", 00:21:01.059 "target": "spare", 00:21:01.059 "progress": { 00:21:01.059 "blocks": 92160, 00:21:01.059 "percent": 72 00:21:01.059 } 00:21:01.059 }, 00:21:01.059 "base_bdevs_list": [ 00:21:01.059 { 00:21:01.059 "name": "spare", 00:21:01.059 "uuid": "95ceeebe-1196-59df-9263-61633402193a", 00:21:01.059 "is_configured": true, 00:21:01.059 "data_offset": 2048, 00:21:01.059 "data_size": 63488 00:21:01.059 }, 00:21:01.059 { 00:21:01.059 "name": "BaseBdev2", 00:21:01.059 "uuid": "67a3a48f-b505-54d7-98c6-b50d8c078b14", 00:21:01.059 "is_configured": true, 00:21:01.059 "data_offset": 2048, 00:21:01.059 "data_size": 63488 00:21:01.059 }, 00:21:01.059 { 00:21:01.059 "name": "BaseBdev3", 00:21:01.059 "uuid": "e9016150-a0e0-5579-80ff-680b4057dc06", 00:21:01.059 "is_configured": true, 00:21:01.059 "data_offset": 2048, 00:21:01.059 "data_size": 63488 00:21:01.059 } 00:21:01.059 ] 00:21:01.059 }' 00:21:01.059 13:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:01.059 13:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:01.059 13:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:01.059 13:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:01.059 13:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:02.060 13:41:01 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:02.060 13:41:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:02.060 13:41:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:02.060 13:41:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:02.060 13:41:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:02.060 13:41:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:02.060 13:41:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.060 13:41:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.060 13:41:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.060 13:41:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:02.060 13:41:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.060 13:41:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:02.060 "name": "raid_bdev1", 00:21:02.060 "uuid": "e5a158d3-3130-4614-8a56-b805e70ff902", 00:21:02.060 "strip_size_kb": 64, 00:21:02.060 "state": "online", 00:21:02.060 "raid_level": "raid5f", 00:21:02.060 "superblock": true, 00:21:02.060 "num_base_bdevs": 3, 00:21:02.060 "num_base_bdevs_discovered": 3, 00:21:02.060 "num_base_bdevs_operational": 3, 00:21:02.060 "process": { 00:21:02.060 "type": "rebuild", 00:21:02.060 "target": "spare", 00:21:02.060 "progress": { 00:21:02.060 "blocks": 114688, 00:21:02.060 "percent": 90 00:21:02.060 } 00:21:02.060 }, 00:21:02.060 "base_bdevs_list": [ 00:21:02.060 { 00:21:02.060 "name": "spare", 00:21:02.060 "uuid": 
"95ceeebe-1196-59df-9263-61633402193a", 00:21:02.060 "is_configured": true, 00:21:02.060 "data_offset": 2048, 00:21:02.060 "data_size": 63488 00:21:02.060 }, 00:21:02.060 { 00:21:02.060 "name": "BaseBdev2", 00:21:02.060 "uuid": "67a3a48f-b505-54d7-98c6-b50d8c078b14", 00:21:02.060 "is_configured": true, 00:21:02.060 "data_offset": 2048, 00:21:02.060 "data_size": 63488 00:21:02.060 }, 00:21:02.060 { 00:21:02.060 "name": "BaseBdev3", 00:21:02.060 "uuid": "e9016150-a0e0-5579-80ff-680b4057dc06", 00:21:02.060 "is_configured": true, 00:21:02.060 "data_offset": 2048, 00:21:02.060 "data_size": 63488 00:21:02.060 } 00:21:02.060 ] 00:21:02.060 }' 00:21:02.060 13:41:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:02.060 13:41:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:02.060 13:41:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:02.060 13:41:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:02.060 13:41:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:02.633 [2024-11-20 13:41:01.958249] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:02.633 [2024-11-20 13:41:01.958398] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:02.633 [2024-11-20 13:41:01.958575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:03.200 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:03.200 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:03.200 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:03.200 13:41:02 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:03.200 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:03.200 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:03.200 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.200 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.200 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.200 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:03.200 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.200 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:03.200 "name": "raid_bdev1", 00:21:03.200 "uuid": "e5a158d3-3130-4614-8a56-b805e70ff902", 00:21:03.200 "strip_size_kb": 64, 00:21:03.200 "state": "online", 00:21:03.200 "raid_level": "raid5f", 00:21:03.200 "superblock": true, 00:21:03.200 "num_base_bdevs": 3, 00:21:03.200 "num_base_bdevs_discovered": 3, 00:21:03.200 "num_base_bdevs_operational": 3, 00:21:03.200 "base_bdevs_list": [ 00:21:03.200 { 00:21:03.200 "name": "spare", 00:21:03.200 "uuid": "95ceeebe-1196-59df-9263-61633402193a", 00:21:03.200 "is_configured": true, 00:21:03.200 "data_offset": 2048, 00:21:03.200 "data_size": 63488 00:21:03.200 }, 00:21:03.200 { 00:21:03.200 "name": "BaseBdev2", 00:21:03.200 "uuid": "67a3a48f-b505-54d7-98c6-b50d8c078b14", 00:21:03.200 "is_configured": true, 00:21:03.200 "data_offset": 2048, 00:21:03.200 "data_size": 63488 00:21:03.200 }, 00:21:03.200 { 00:21:03.200 "name": "BaseBdev3", 00:21:03.200 "uuid": "e9016150-a0e0-5579-80ff-680b4057dc06", 00:21:03.200 "is_configured": true, 00:21:03.200 "data_offset": 2048, 00:21:03.200 "data_size": 63488 00:21:03.200 } 
00:21:03.200 ] 00:21:03.200 }' 00:21:03.200 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:03.200 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:03.200 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:03.200 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:03.200 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:21:03.200 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:03.200 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:03.200 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:03.200 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:03.200 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:03.460 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.460 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.460 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.460 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:03.460 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.460 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:03.460 "name": "raid_bdev1", 00:21:03.460 "uuid": "e5a158d3-3130-4614-8a56-b805e70ff902", 00:21:03.460 "strip_size_kb": 64, 00:21:03.460 "state": "online", 00:21:03.460 "raid_level": 
"raid5f", 00:21:03.460 "superblock": true, 00:21:03.460 "num_base_bdevs": 3, 00:21:03.460 "num_base_bdevs_discovered": 3, 00:21:03.460 "num_base_bdevs_operational": 3, 00:21:03.460 "base_bdevs_list": [ 00:21:03.460 { 00:21:03.460 "name": "spare", 00:21:03.460 "uuid": "95ceeebe-1196-59df-9263-61633402193a", 00:21:03.460 "is_configured": true, 00:21:03.460 "data_offset": 2048, 00:21:03.460 "data_size": 63488 00:21:03.460 }, 00:21:03.460 { 00:21:03.460 "name": "BaseBdev2", 00:21:03.460 "uuid": "67a3a48f-b505-54d7-98c6-b50d8c078b14", 00:21:03.460 "is_configured": true, 00:21:03.460 "data_offset": 2048, 00:21:03.460 "data_size": 63488 00:21:03.460 }, 00:21:03.460 { 00:21:03.460 "name": "BaseBdev3", 00:21:03.460 "uuid": "e9016150-a0e0-5579-80ff-680b4057dc06", 00:21:03.460 "is_configured": true, 00:21:03.460 "data_offset": 2048, 00:21:03.460 "data_size": 63488 00:21:03.460 } 00:21:03.460 ] 00:21:03.460 }' 00:21:03.460 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:03.460 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:03.460 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:03.460 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:03.460 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:03.460 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:03.460 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:03.460 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:03.460 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:03.460 13:41:02 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:03.460 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:03.460 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:03.460 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:03.460 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:03.460 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.460 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.460 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.460 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:03.460 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.460 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:03.460 "name": "raid_bdev1", 00:21:03.460 "uuid": "e5a158d3-3130-4614-8a56-b805e70ff902", 00:21:03.460 "strip_size_kb": 64, 00:21:03.460 "state": "online", 00:21:03.460 "raid_level": "raid5f", 00:21:03.460 "superblock": true, 00:21:03.460 "num_base_bdevs": 3, 00:21:03.460 "num_base_bdevs_discovered": 3, 00:21:03.460 "num_base_bdevs_operational": 3, 00:21:03.460 "base_bdevs_list": [ 00:21:03.460 { 00:21:03.460 "name": "spare", 00:21:03.460 "uuid": "95ceeebe-1196-59df-9263-61633402193a", 00:21:03.460 "is_configured": true, 00:21:03.460 "data_offset": 2048, 00:21:03.460 "data_size": 63488 00:21:03.460 }, 00:21:03.460 { 00:21:03.460 "name": "BaseBdev2", 00:21:03.460 "uuid": "67a3a48f-b505-54d7-98c6-b50d8c078b14", 00:21:03.460 "is_configured": true, 00:21:03.460 "data_offset": 2048, 00:21:03.460 
"data_size": 63488 00:21:03.460 }, 00:21:03.460 { 00:21:03.460 "name": "BaseBdev3", 00:21:03.460 "uuid": "e9016150-a0e0-5579-80ff-680b4057dc06", 00:21:03.460 "is_configured": true, 00:21:03.460 "data_offset": 2048, 00:21:03.460 "data_size": 63488 00:21:03.460 } 00:21:03.460 ] 00:21:03.460 }' 00:21:03.460 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:03.460 13:41:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:04.029 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:04.029 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.029 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:04.029 [2024-11-20 13:41:03.250383] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:04.029 [2024-11-20 13:41:03.250429] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:04.029 [2024-11-20 13:41:03.250524] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:04.029 [2024-11-20 13:41:03.250614] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:04.029 [2024-11-20 13:41:03.250633] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:04.029 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.029 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.029 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.029 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:21:04.029 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:21:04.029 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.029 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:04.029 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:04.029 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:21:04.029 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:04.029 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:04.029 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:04.029 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:04.029 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:04.029 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:04.029 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:21:04.029 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:04.029 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:04.029 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:04.288 /dev/nbd0 00:21:04.288 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:04.288 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:04.288 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 
00:21:04.288 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:21:04.288 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:04.288 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:04.288 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:04.288 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:21:04.288 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:04.288 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:04.288 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:04.288 1+0 records in 00:21:04.288 1+0 records out 00:21:04.288 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419163 s, 9.8 MB/s 00:21:04.288 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:04.288 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:21:04.288 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:04.288 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:04.288 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:21:04.288 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:04.288 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:04.288 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:21:04.547 /dev/nbd1 00:21:04.547 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:04.547 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:04.547 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:04.547 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:21:04.547 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:04.547 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:04.547 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:04.547 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:21:04.547 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:04.547 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:04.547 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:04.547 1+0 records in 00:21:04.547 1+0 records out 00:21:04.547 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000437206 s, 9.4 MB/s 00:21:04.547 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:04.547 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:21:04.547 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:04.547 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # 
'[' 4096 '!=' 0 ']' 00:21:04.547 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:21:04.547 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:04.547 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:04.547 13:41:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:04.807 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:04.807 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:04.807 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:04.807 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:04.807 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:21:04.807 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:04.807 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:05.065 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:05.065 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:05.066 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:05.066 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:05.066 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:05.066 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:05.066 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@41 -- # break 00:21:05.066 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:05.066 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:05.066 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:05.326 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:05.326 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:05.326 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:05.326 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:05.326 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:05.326 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:05.326 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:05.326 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:05.326 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:05.326 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:05.326 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.326 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.326 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.326 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:05.326 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:05.326 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.326 [2024-11-20 13:41:04.606269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:05.326 [2024-11-20 13:41:04.606479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:05.326 [2024-11-20 13:41:04.606546] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:21:05.326 [2024-11-20 13:41:04.606644] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:05.326 [2024-11-20 13:41:04.609654] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:05.326 [2024-11-20 13:41:04.609826] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:05.326 [2024-11-20 13:41:04.610072] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:05.326 [2024-11-20 13:41:04.610287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:05.326 [2024-11-20 13:41:04.610562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:05.326 [2024-11-20 13:41:04.610851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:05.326 spare 00:21:05.326 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.326 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:05.326 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.326 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.326 [2024-11-20 13:41:04.710906] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:05.326 [2024-11-20 13:41:04.710960] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:05.326 [2024-11-20 13:41:04.711353] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:21:05.326 [2024-11-20 13:41:04.717867] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:05.326 [2024-11-20 13:41:04.717891] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:05.326 [2024-11-20 13:41:04.718152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:05.326 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.326 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:05.326 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:05.326 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:05.326 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:05.326 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:05.326 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:05.326 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:05.326 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:05.326 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:05.326 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:05.326 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.326 13:41:04 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.326 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.326 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.326 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.326 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:05.326 "name": "raid_bdev1", 00:21:05.326 "uuid": "e5a158d3-3130-4614-8a56-b805e70ff902", 00:21:05.326 "strip_size_kb": 64, 00:21:05.326 "state": "online", 00:21:05.326 "raid_level": "raid5f", 00:21:05.326 "superblock": true, 00:21:05.326 "num_base_bdevs": 3, 00:21:05.326 "num_base_bdevs_discovered": 3, 00:21:05.326 "num_base_bdevs_operational": 3, 00:21:05.326 "base_bdevs_list": [ 00:21:05.326 { 00:21:05.326 "name": "spare", 00:21:05.326 "uuid": "95ceeebe-1196-59df-9263-61633402193a", 00:21:05.326 "is_configured": true, 00:21:05.326 "data_offset": 2048, 00:21:05.326 "data_size": 63488 00:21:05.326 }, 00:21:05.326 { 00:21:05.326 "name": "BaseBdev2", 00:21:05.326 "uuid": "67a3a48f-b505-54d7-98c6-b50d8c078b14", 00:21:05.326 "is_configured": true, 00:21:05.326 "data_offset": 2048, 00:21:05.326 "data_size": 63488 00:21:05.326 }, 00:21:05.326 { 00:21:05.326 "name": "BaseBdev3", 00:21:05.327 "uuid": "e9016150-a0e0-5579-80ff-680b4057dc06", 00:21:05.327 "is_configured": true, 00:21:05.327 "data_offset": 2048, 00:21:05.327 "data_size": 63488 00:21:05.327 } 00:21:05.327 ] 00:21:05.327 }' 00:21:05.327 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:05.327 13:41:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.896 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:05.896 13:41:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:05.896 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:05.896 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:05.896 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:05.896 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.896 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.896 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.896 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.896 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.896 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:05.896 "name": "raid_bdev1", 00:21:05.896 "uuid": "e5a158d3-3130-4614-8a56-b805e70ff902", 00:21:05.896 "strip_size_kb": 64, 00:21:05.896 "state": "online", 00:21:05.896 "raid_level": "raid5f", 00:21:05.896 "superblock": true, 00:21:05.896 "num_base_bdevs": 3, 00:21:05.896 "num_base_bdevs_discovered": 3, 00:21:05.896 "num_base_bdevs_operational": 3, 00:21:05.896 "base_bdevs_list": [ 00:21:05.896 { 00:21:05.896 "name": "spare", 00:21:05.896 "uuid": "95ceeebe-1196-59df-9263-61633402193a", 00:21:05.896 "is_configured": true, 00:21:05.896 "data_offset": 2048, 00:21:05.896 "data_size": 63488 00:21:05.896 }, 00:21:05.896 { 00:21:05.896 "name": "BaseBdev2", 00:21:05.896 "uuid": "67a3a48f-b505-54d7-98c6-b50d8c078b14", 00:21:05.896 "is_configured": true, 00:21:05.896 "data_offset": 2048, 00:21:05.896 "data_size": 63488 00:21:05.896 }, 00:21:05.896 { 00:21:05.896 "name": "BaseBdev3", 00:21:05.896 "uuid": 
"e9016150-a0e0-5579-80ff-680b4057dc06", 00:21:05.896 "is_configured": true, 00:21:05.896 "data_offset": 2048, 00:21:05.896 "data_size": 63488 00:21:05.896 } 00:21:05.896 ] 00:21:05.896 }' 00:21:05.896 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:05.896 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:05.896 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:05.896 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:05.896 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.896 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:05.896 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.896 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.896 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.896 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:05.896 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:05.896 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.896 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.896 [2024-11-20 13:41:05.328872] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:05.896 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.896 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:05.896 
13:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:05.896 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:05.896 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:05.896 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:05.896 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:05.896 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:05.896 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:05.897 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:05.897 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:05.897 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.897 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.897 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.897 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.897 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.156 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:06.156 "name": "raid_bdev1", 00:21:06.156 "uuid": "e5a158d3-3130-4614-8a56-b805e70ff902", 00:21:06.156 "strip_size_kb": 64, 00:21:06.156 "state": "online", 00:21:06.156 "raid_level": "raid5f", 00:21:06.156 "superblock": true, 00:21:06.156 "num_base_bdevs": 3, 00:21:06.156 "num_base_bdevs_discovered": 2, 00:21:06.156 "num_base_bdevs_operational": 2, 
00:21:06.156 "base_bdevs_list": [ 00:21:06.156 { 00:21:06.156 "name": null, 00:21:06.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.156 "is_configured": false, 00:21:06.156 "data_offset": 0, 00:21:06.156 "data_size": 63488 00:21:06.156 }, 00:21:06.156 { 00:21:06.156 "name": "BaseBdev2", 00:21:06.156 "uuid": "67a3a48f-b505-54d7-98c6-b50d8c078b14", 00:21:06.156 "is_configured": true, 00:21:06.156 "data_offset": 2048, 00:21:06.156 "data_size": 63488 00:21:06.156 }, 00:21:06.156 { 00:21:06.156 "name": "BaseBdev3", 00:21:06.156 "uuid": "e9016150-a0e0-5579-80ff-680b4057dc06", 00:21:06.156 "is_configured": true, 00:21:06.156 "data_offset": 2048, 00:21:06.156 "data_size": 63488 00:21:06.156 } 00:21:06.156 ] 00:21:06.156 }' 00:21:06.156 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:06.156 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.415 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:06.415 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.415 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.415 [2024-11-20 13:41:05.776358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:06.415 [2024-11-20 13:41:05.776745] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:06.415 [2024-11-20 13:41:05.776904] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:06.415 [2024-11-20 13:41:05.776971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:06.415 [2024-11-20 13:41:05.795410] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:21:06.415 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.415 13:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:06.415 [2024-11-20 13:41:05.804561] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:07.423 13:41:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:07.423 13:41:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:07.423 13:41:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:07.423 13:41:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:07.423 13:41:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:07.423 13:41:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:07.423 13:41:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.423 13:41:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.423 13:41:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:07.423 13:41:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.423 13:41:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:07.423 "name": "raid_bdev1", 00:21:07.423 "uuid": "e5a158d3-3130-4614-8a56-b805e70ff902", 00:21:07.423 "strip_size_kb": 64, 00:21:07.423 "state": "online", 00:21:07.423 
"raid_level": "raid5f", 00:21:07.423 "superblock": true, 00:21:07.423 "num_base_bdevs": 3, 00:21:07.423 "num_base_bdevs_discovered": 3, 00:21:07.423 "num_base_bdevs_operational": 3, 00:21:07.423 "process": { 00:21:07.423 "type": "rebuild", 00:21:07.423 "target": "spare", 00:21:07.423 "progress": { 00:21:07.423 "blocks": 20480, 00:21:07.423 "percent": 16 00:21:07.423 } 00:21:07.423 }, 00:21:07.423 "base_bdevs_list": [ 00:21:07.423 { 00:21:07.423 "name": "spare", 00:21:07.423 "uuid": "95ceeebe-1196-59df-9263-61633402193a", 00:21:07.424 "is_configured": true, 00:21:07.424 "data_offset": 2048, 00:21:07.424 "data_size": 63488 00:21:07.424 }, 00:21:07.424 { 00:21:07.424 "name": "BaseBdev2", 00:21:07.424 "uuid": "67a3a48f-b505-54d7-98c6-b50d8c078b14", 00:21:07.424 "is_configured": true, 00:21:07.424 "data_offset": 2048, 00:21:07.424 "data_size": 63488 00:21:07.424 }, 00:21:07.424 { 00:21:07.424 "name": "BaseBdev3", 00:21:07.424 "uuid": "e9016150-a0e0-5579-80ff-680b4057dc06", 00:21:07.424 "is_configured": true, 00:21:07.424 "data_offset": 2048, 00:21:07.424 "data_size": 63488 00:21:07.424 } 00:21:07.424 ] 00:21:07.424 }' 00:21:07.424 13:41:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:07.424 13:41:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:07.424 13:41:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:07.682 13:41:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:07.682 13:41:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:07.682 13:41:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.682 13:41:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:07.682 [2024-11-20 13:41:06.956003] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:07.682 [2024-11-20 13:41:07.014949] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:07.682 [2024-11-20 13:41:07.015029] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:07.682 [2024-11-20 13:41:07.015051] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:07.682 [2024-11-20 13:41:07.015082] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:07.682 13:41:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.682 13:41:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:07.682 13:41:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:07.682 13:41:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:07.682 13:41:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:07.682 13:41:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:07.682 13:41:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:07.682 13:41:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:07.682 13:41:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:07.682 13:41:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:07.682 13:41:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:07.682 13:41:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:07.682 13:41:07 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.682 13:41:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.682 13:41:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:07.682 13:41:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.682 13:41:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:07.682 "name": "raid_bdev1", 00:21:07.682 "uuid": "e5a158d3-3130-4614-8a56-b805e70ff902", 00:21:07.682 "strip_size_kb": 64, 00:21:07.682 "state": "online", 00:21:07.682 "raid_level": "raid5f", 00:21:07.682 "superblock": true, 00:21:07.682 "num_base_bdevs": 3, 00:21:07.682 "num_base_bdevs_discovered": 2, 00:21:07.682 "num_base_bdevs_operational": 2, 00:21:07.682 "base_bdevs_list": [ 00:21:07.682 { 00:21:07.682 "name": null, 00:21:07.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.682 "is_configured": false, 00:21:07.682 "data_offset": 0, 00:21:07.682 "data_size": 63488 00:21:07.682 }, 00:21:07.682 { 00:21:07.682 "name": "BaseBdev2", 00:21:07.682 "uuid": "67a3a48f-b505-54d7-98c6-b50d8c078b14", 00:21:07.682 "is_configured": true, 00:21:07.682 "data_offset": 2048, 00:21:07.682 "data_size": 63488 00:21:07.682 }, 00:21:07.682 { 00:21:07.682 "name": "BaseBdev3", 00:21:07.682 "uuid": "e9016150-a0e0-5579-80ff-680b4057dc06", 00:21:07.682 "is_configured": true, 00:21:07.682 "data_offset": 2048, 00:21:07.682 "data_size": 63488 00:21:07.682 } 00:21:07.682 ] 00:21:07.682 }' 00:21:07.682 13:41:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:07.682 13:41:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.249 13:41:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:08.249 13:41:07 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.249 13:41:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.249 [2024-11-20 13:41:07.473263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:08.249 [2024-11-20 13:41:07.473490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:08.249 [2024-11-20 13:41:07.473541] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:21:08.249 [2024-11-20 13:41:07.473563] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:08.249 [2024-11-20 13:41:07.474134] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:08.249 [2024-11-20 13:41:07.474163] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:08.249 [2024-11-20 13:41:07.474297] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:08.249 [2024-11-20 13:41:07.474320] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:08.249 [2024-11-20 13:41:07.474333] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:08.249 [2024-11-20 13:41:07.474363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:08.249 [2024-11-20 13:41:07.492580] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:21:08.249 spare 00:21:08.249 13:41:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.249 13:41:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:08.249 [2024-11-20 13:41:07.501449] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:09.185 13:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:09.185 13:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:09.185 13:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:09.185 13:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:09.185 13:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:09.185 13:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.185 13:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.185 13:41:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.185 13:41:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.185 13:41:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.185 13:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:09.185 "name": "raid_bdev1", 00:21:09.185 "uuid": "e5a158d3-3130-4614-8a56-b805e70ff902", 00:21:09.185 "strip_size_kb": 64, 00:21:09.185 "state": 
"online", 00:21:09.185 "raid_level": "raid5f", 00:21:09.185 "superblock": true, 00:21:09.185 "num_base_bdevs": 3, 00:21:09.185 "num_base_bdevs_discovered": 3, 00:21:09.185 "num_base_bdevs_operational": 3, 00:21:09.185 "process": { 00:21:09.185 "type": "rebuild", 00:21:09.185 "target": "spare", 00:21:09.185 "progress": { 00:21:09.185 "blocks": 20480, 00:21:09.185 "percent": 16 00:21:09.185 } 00:21:09.186 }, 00:21:09.186 "base_bdevs_list": [ 00:21:09.186 { 00:21:09.186 "name": "spare", 00:21:09.186 "uuid": "95ceeebe-1196-59df-9263-61633402193a", 00:21:09.186 "is_configured": true, 00:21:09.186 "data_offset": 2048, 00:21:09.186 "data_size": 63488 00:21:09.186 }, 00:21:09.186 { 00:21:09.186 "name": "BaseBdev2", 00:21:09.186 "uuid": "67a3a48f-b505-54d7-98c6-b50d8c078b14", 00:21:09.186 "is_configured": true, 00:21:09.186 "data_offset": 2048, 00:21:09.186 "data_size": 63488 00:21:09.186 }, 00:21:09.186 { 00:21:09.186 "name": "BaseBdev3", 00:21:09.186 "uuid": "e9016150-a0e0-5579-80ff-680b4057dc06", 00:21:09.186 "is_configured": true, 00:21:09.186 "data_offset": 2048, 00:21:09.186 "data_size": 63488 00:21:09.186 } 00:21:09.186 ] 00:21:09.186 }' 00:21:09.186 13:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:09.186 13:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:09.186 13:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:09.186 13:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:09.186 13:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:09.186 13:41:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.186 13:41:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.186 [2024-11-20 13:41:08.649244] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:09.445 [2024-11-20 13:41:08.711215] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:09.445 [2024-11-20 13:41:08.711444] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:09.445 [2024-11-20 13:41:08.711617] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:09.445 [2024-11-20 13:41:08.711660] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:09.445 13:41:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.445 13:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:09.445 13:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:09.445 13:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:09.445 13:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:09.445 13:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:09.445 13:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:09.445 13:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:09.445 13:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:09.445 13:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:09.445 13:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:09.445 13:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.445 13:41:08 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.445 13:41:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.445 13:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.445 13:41:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.445 13:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:09.445 "name": "raid_bdev1", 00:21:09.445 "uuid": "e5a158d3-3130-4614-8a56-b805e70ff902", 00:21:09.445 "strip_size_kb": 64, 00:21:09.445 "state": "online", 00:21:09.445 "raid_level": "raid5f", 00:21:09.445 "superblock": true, 00:21:09.445 "num_base_bdevs": 3, 00:21:09.445 "num_base_bdevs_discovered": 2, 00:21:09.445 "num_base_bdevs_operational": 2, 00:21:09.445 "base_bdevs_list": [ 00:21:09.445 { 00:21:09.445 "name": null, 00:21:09.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.445 "is_configured": false, 00:21:09.445 "data_offset": 0, 00:21:09.445 "data_size": 63488 00:21:09.445 }, 00:21:09.445 { 00:21:09.445 "name": "BaseBdev2", 00:21:09.445 "uuid": "67a3a48f-b505-54d7-98c6-b50d8c078b14", 00:21:09.445 "is_configured": true, 00:21:09.445 "data_offset": 2048, 00:21:09.445 "data_size": 63488 00:21:09.445 }, 00:21:09.445 { 00:21:09.445 "name": "BaseBdev3", 00:21:09.445 "uuid": "e9016150-a0e0-5579-80ff-680b4057dc06", 00:21:09.445 "is_configured": true, 00:21:09.445 "data_offset": 2048, 00:21:09.445 "data_size": 63488 00:21:09.445 } 00:21:09.445 ] 00:21:09.445 }' 00:21:09.445 13:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:09.445 13:41:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.013 13:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:10.013 13:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:21:10.013 13:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:10.013 13:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:10.013 13:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:10.013 13:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.013 13:41:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.013 13:41:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.013 13:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.013 13:41:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.013 13:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:10.013 "name": "raid_bdev1", 00:21:10.013 "uuid": "e5a158d3-3130-4614-8a56-b805e70ff902", 00:21:10.013 "strip_size_kb": 64, 00:21:10.013 "state": "online", 00:21:10.013 "raid_level": "raid5f", 00:21:10.013 "superblock": true, 00:21:10.013 "num_base_bdevs": 3, 00:21:10.013 "num_base_bdevs_discovered": 2, 00:21:10.013 "num_base_bdevs_operational": 2, 00:21:10.013 "base_bdevs_list": [ 00:21:10.013 { 00:21:10.013 "name": null, 00:21:10.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.013 "is_configured": false, 00:21:10.013 "data_offset": 0, 00:21:10.013 "data_size": 63488 00:21:10.013 }, 00:21:10.013 { 00:21:10.013 "name": "BaseBdev2", 00:21:10.013 "uuid": "67a3a48f-b505-54d7-98c6-b50d8c078b14", 00:21:10.014 "is_configured": true, 00:21:10.014 "data_offset": 2048, 00:21:10.014 "data_size": 63488 00:21:10.014 }, 00:21:10.014 { 00:21:10.014 "name": "BaseBdev3", 00:21:10.014 "uuid": "e9016150-a0e0-5579-80ff-680b4057dc06", 00:21:10.014 "is_configured": true, 
00:21:10.014 "data_offset": 2048, 00:21:10.014 "data_size": 63488 00:21:10.014 } 00:21:10.014 ] 00:21:10.014 }' 00:21:10.014 13:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:10.014 13:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:10.014 13:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:10.014 13:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:10.014 13:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:10.014 13:41:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.014 13:41:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.014 13:41:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.014 13:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:10.014 13:41:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.014 13:41:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.014 [2024-11-20 13:41:09.382952] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:10.014 [2024-11-20 13:41:09.383017] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:10.014 [2024-11-20 13:41:09.383046] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:21:10.014 [2024-11-20 13:41:09.383070] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:10.014 [2024-11-20 13:41:09.383530] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:10.014 [2024-11-20 
13:41:09.383674] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:10.014 [2024-11-20 13:41:09.383784] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:10.014 [2024-11-20 13:41:09.383804] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:10.014 [2024-11-20 13:41:09.383832] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:10.014 [2024-11-20 13:41:09.383844] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:10.014 BaseBdev1 00:21:10.014 13:41:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.014 13:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:10.949 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:10.949 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:10.949 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:10.949 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:10.950 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:10.950 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:10.950 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:10.950 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:10.950 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:10.950 13:41:10 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:10.950 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.950 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.950 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.950 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.950 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.208 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:11.208 "name": "raid_bdev1", 00:21:11.208 "uuid": "e5a158d3-3130-4614-8a56-b805e70ff902", 00:21:11.208 "strip_size_kb": 64, 00:21:11.208 "state": "online", 00:21:11.208 "raid_level": "raid5f", 00:21:11.208 "superblock": true, 00:21:11.208 "num_base_bdevs": 3, 00:21:11.208 "num_base_bdevs_discovered": 2, 00:21:11.208 "num_base_bdevs_operational": 2, 00:21:11.208 "base_bdevs_list": [ 00:21:11.208 { 00:21:11.208 "name": null, 00:21:11.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.208 "is_configured": false, 00:21:11.208 "data_offset": 0, 00:21:11.208 "data_size": 63488 00:21:11.208 }, 00:21:11.208 { 00:21:11.208 "name": "BaseBdev2", 00:21:11.208 "uuid": "67a3a48f-b505-54d7-98c6-b50d8c078b14", 00:21:11.208 "is_configured": true, 00:21:11.208 "data_offset": 2048, 00:21:11.208 "data_size": 63488 00:21:11.208 }, 00:21:11.208 { 00:21:11.208 "name": "BaseBdev3", 00:21:11.208 "uuid": "e9016150-a0e0-5579-80ff-680b4057dc06", 00:21:11.208 "is_configured": true, 00:21:11.208 "data_offset": 2048, 00:21:11.208 "data_size": 63488 00:21:11.208 } 00:21:11.208 ] 00:21:11.208 }' 00:21:11.208 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:11.208 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:21:11.467 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:11.467 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:11.467 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:11.467 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:11.467 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:11.467 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.467 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.467 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.467 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.467 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.467 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:11.467 "name": "raid_bdev1", 00:21:11.468 "uuid": "e5a158d3-3130-4614-8a56-b805e70ff902", 00:21:11.468 "strip_size_kb": 64, 00:21:11.468 "state": "online", 00:21:11.468 "raid_level": "raid5f", 00:21:11.468 "superblock": true, 00:21:11.468 "num_base_bdevs": 3, 00:21:11.468 "num_base_bdevs_discovered": 2, 00:21:11.468 "num_base_bdevs_operational": 2, 00:21:11.468 "base_bdevs_list": [ 00:21:11.468 { 00:21:11.468 "name": null, 00:21:11.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.468 "is_configured": false, 00:21:11.468 "data_offset": 0, 00:21:11.468 "data_size": 63488 00:21:11.468 }, 00:21:11.468 { 00:21:11.468 "name": "BaseBdev2", 00:21:11.468 "uuid": "67a3a48f-b505-54d7-98c6-b50d8c078b14", 
00:21:11.468 "is_configured": true, 00:21:11.468 "data_offset": 2048, 00:21:11.468 "data_size": 63488 00:21:11.468 }, 00:21:11.468 { 00:21:11.468 "name": "BaseBdev3", 00:21:11.468 "uuid": "e9016150-a0e0-5579-80ff-680b4057dc06", 00:21:11.468 "is_configured": true, 00:21:11.468 "data_offset": 2048, 00:21:11.468 "data_size": 63488 00:21:11.468 } 00:21:11.468 ] 00:21:11.468 }' 00:21:11.468 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:11.468 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:11.468 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:11.727 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:11.727 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:11.727 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:21:11.727 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:11.727 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:11.727 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:11.727 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:11.727 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:11.727 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:11.727 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.727 13:41:10 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.727 [2024-11-20 13:41:10.985667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:11.727 [2024-11-20 13:41:10.985967] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:11.727 [2024-11-20 13:41:10.986099] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:11.727 request: 00:21:11.727 { 00:21:11.727 "base_bdev": "BaseBdev1", 00:21:11.727 "raid_bdev": "raid_bdev1", 00:21:11.727 "method": "bdev_raid_add_base_bdev", 00:21:11.727 "req_id": 1 00:21:11.727 } 00:21:11.727 Got JSON-RPC error response 00:21:11.727 response: 00:21:11.727 { 00:21:11.727 "code": -22, 00:21:11.727 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:11.727 } 00:21:11.727 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:11.727 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:21:11.727 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:11.727 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:11.727 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:11.727 13:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:12.665 13:41:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:12.665 13:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:12.665 13:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:12.665 13:41:12 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:12.665 13:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:12.665 13:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:12.665 13:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:12.665 13:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:12.665 13:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:12.665 13:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:12.665 13:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:12.665 13:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:12.665 13:41:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.665 13:41:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:12.665 13:41:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.665 13:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:12.665 "name": "raid_bdev1", 00:21:12.665 "uuid": "e5a158d3-3130-4614-8a56-b805e70ff902", 00:21:12.665 "strip_size_kb": 64, 00:21:12.665 "state": "online", 00:21:12.665 "raid_level": "raid5f", 00:21:12.665 "superblock": true, 00:21:12.665 "num_base_bdevs": 3, 00:21:12.665 "num_base_bdevs_discovered": 2, 00:21:12.665 "num_base_bdevs_operational": 2, 00:21:12.665 "base_bdevs_list": [ 00:21:12.665 { 00:21:12.665 "name": null, 00:21:12.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.665 "is_configured": false, 00:21:12.665 "data_offset": 0, 00:21:12.665 "data_size": 63488 00:21:12.665 }, 00:21:12.665 { 00:21:12.665 
"name": "BaseBdev2", 00:21:12.665 "uuid": "67a3a48f-b505-54d7-98c6-b50d8c078b14", 00:21:12.665 "is_configured": true, 00:21:12.665 "data_offset": 2048, 00:21:12.665 "data_size": 63488 00:21:12.665 }, 00:21:12.665 { 00:21:12.665 "name": "BaseBdev3", 00:21:12.665 "uuid": "e9016150-a0e0-5579-80ff-680b4057dc06", 00:21:12.665 "is_configured": true, 00:21:12.665 "data_offset": 2048, 00:21:12.665 "data_size": 63488 00:21:12.665 } 00:21:12.665 ] 00:21:12.665 }' 00:21:12.665 13:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:12.665 13:41:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:13.234 13:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:13.234 13:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:13.234 13:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:13.234 13:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:13.234 13:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:13.234 13:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.234 13:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.234 13:41:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.234 13:41:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:13.234 13:41:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.234 13:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:13.234 "name": "raid_bdev1", 00:21:13.234 "uuid": "e5a158d3-3130-4614-8a56-b805e70ff902", 00:21:13.234 
"strip_size_kb": 64, 00:21:13.234 "state": "online", 00:21:13.234 "raid_level": "raid5f", 00:21:13.234 "superblock": true, 00:21:13.234 "num_base_bdevs": 3, 00:21:13.234 "num_base_bdevs_discovered": 2, 00:21:13.234 "num_base_bdevs_operational": 2, 00:21:13.234 "base_bdevs_list": [ 00:21:13.234 { 00:21:13.234 "name": null, 00:21:13.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.234 "is_configured": false, 00:21:13.234 "data_offset": 0, 00:21:13.234 "data_size": 63488 00:21:13.234 }, 00:21:13.234 { 00:21:13.234 "name": "BaseBdev2", 00:21:13.234 "uuid": "67a3a48f-b505-54d7-98c6-b50d8c078b14", 00:21:13.234 "is_configured": true, 00:21:13.234 "data_offset": 2048, 00:21:13.234 "data_size": 63488 00:21:13.234 }, 00:21:13.234 { 00:21:13.234 "name": "BaseBdev3", 00:21:13.234 "uuid": "e9016150-a0e0-5579-80ff-680b4057dc06", 00:21:13.234 "is_configured": true, 00:21:13.234 "data_offset": 2048, 00:21:13.234 "data_size": 63488 00:21:13.234 } 00:21:13.234 ] 00:21:13.234 }' 00:21:13.234 13:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:13.234 13:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:13.234 13:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:13.234 13:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:13.234 13:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 81814 00:21:13.234 13:41:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 81814 ']' 00:21:13.234 13:41:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 81814 00:21:13.234 13:41:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:21:13.234 13:41:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:13.234 13:41:12 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81814 00:21:13.234 killing process with pid 81814 00:21:13.234 Received shutdown signal, test time was about 60.000000 seconds 00:21:13.234 00:21:13.234 Latency(us) 00:21:13.234 [2024-11-20T13:41:12.719Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:13.234 [2024-11-20T13:41:12.719Z] =================================================================================================================== 00:21:13.234 [2024-11-20T13:41:12.719Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:13.234 13:41:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:13.234 13:41:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:13.234 13:41:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81814' 00:21:13.234 13:41:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 81814 00:21:13.234 [2024-11-20 13:41:12.613275] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:13.234 13:41:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 81814 00:21:13.234 [2024-11-20 13:41:12.613410] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:13.234 [2024-11-20 13:41:12.613479] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:13.234 [2024-11-20 13:41:12.613496] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:13.802 [2024-11-20 13:41:13.058509] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:15.179 13:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:21:15.179 00:21:15.179 real 0m23.688s 00:21:15.179 user 0m30.084s 
00:21:15.179 sys 0m3.323s 00:21:15.179 13:41:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:15.179 13:41:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.179 ************************************ 00:21:15.179 END TEST raid5f_rebuild_test_sb 00:21:15.179 ************************************ 00:21:15.179 13:41:14 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:21:15.179 13:41:14 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:21:15.179 13:41:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:15.179 13:41:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:15.179 13:41:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:15.179 ************************************ 00:21:15.179 START TEST raid5f_state_function_test 00:21:15.179 ************************************ 00:21:15.179 13:41:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:21:15.179 13:41:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:21:15.179 13:41:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:21:15.179 13:41:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:21:15.179 13:41:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:15.179 13:41:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:15.179 13:41:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:15.179 13:41:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:15.179 13:41:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:21:15.179 13:41:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:15.179 13:41:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:15.179 13:41:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:15.179 13:41:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:15.179 13:41:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:21:15.179 13:41:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:15.179 13:41:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:15.179 13:41:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:21:15.179 13:41:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:15.179 13:41:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:15.179 13:41:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:15.179 13:41:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:15.179 13:41:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:15.179 13:41:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:15.179 13:41:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:15.179 13:41:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:15.179 13:41:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:21:15.179 13:41:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:21:15.179 13:41:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:21:15.179 13:41:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:21:15.179 13:41:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:21:15.179 Process raid pid: 82567 00:21:15.179 13:41:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82567 00:21:15.179 13:41:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:15.179 13:41:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82567' 00:21:15.179 13:41:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82567 00:21:15.179 13:41:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 82567 ']' 00:21:15.179 13:41:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:15.179 13:41:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:15.180 13:41:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:15.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:15.180 13:41:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:15.180 13:41:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.180 [2024-11-20 13:41:14.475801] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:21:15.180 [2024-11-20 13:41:14.476126] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:15.180 [2024-11-20 13:41:14.656880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.439 [2024-11-20 13:41:14.772845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:15.698 [2024-11-20 13:41:14.998999] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:15.698 [2024-11-20 13:41:14.999044] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:15.959 13:41:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:15.959 13:41:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:21:15.959 13:41:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:15.959 13:41:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.959 13:41:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.959 [2024-11-20 13:41:15.337255] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:15.959 [2024-11-20 13:41:15.337316] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:15.959 [2024-11-20 13:41:15.337328] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:15.959 [2024-11-20 13:41:15.337341] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:15.959 [2024-11-20 13:41:15.337349] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:21:15.959 [2024-11-20 13:41:15.337361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:15.959 [2024-11-20 13:41:15.337369] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:15.959 [2024-11-20 13:41:15.337380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:15.959 13:41:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.959 13:41:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:15.959 13:41:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:15.959 13:41:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:15.959 13:41:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:15.959 13:41:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:15.959 13:41:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:15.959 13:41:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:15.959 13:41:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:15.959 13:41:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:15.959 13:41:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:15.959 13:41:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:15.959 13:41:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.959 13:41:15 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:15.959 13:41:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:15.959 13:41:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.959 13:41:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:15.959 "name": "Existed_Raid", 00:21:15.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.959 "strip_size_kb": 64, 00:21:15.959 "state": "configuring", 00:21:15.959 "raid_level": "raid5f", 00:21:15.959 "superblock": false, 00:21:15.959 "num_base_bdevs": 4, 00:21:15.959 "num_base_bdevs_discovered": 0, 00:21:15.959 "num_base_bdevs_operational": 4, 00:21:15.959 "base_bdevs_list": [ 00:21:15.959 { 00:21:15.959 "name": "BaseBdev1", 00:21:15.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.959 "is_configured": false, 00:21:15.959 "data_offset": 0, 00:21:15.959 "data_size": 0 00:21:15.959 }, 00:21:15.959 { 00:21:15.959 "name": "BaseBdev2", 00:21:15.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.959 "is_configured": false, 00:21:15.959 "data_offset": 0, 00:21:15.959 "data_size": 0 00:21:15.959 }, 00:21:15.959 { 00:21:15.959 "name": "BaseBdev3", 00:21:15.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.959 "is_configured": false, 00:21:15.959 "data_offset": 0, 00:21:15.959 "data_size": 0 00:21:15.959 }, 00:21:15.959 { 00:21:15.959 "name": "BaseBdev4", 00:21:15.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.959 "is_configured": false, 00:21:15.959 "data_offset": 0, 00:21:15.959 "data_size": 0 00:21:15.959 } 00:21:15.959 ] 00:21:15.959 }' 00:21:15.959 13:41:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:15.959 13:41:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.530 13:41:15 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:16.530 13:41:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.530 13:41:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.530 [2024-11-20 13:41:15.812913] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:16.530 [2024-11-20 13:41:15.812956] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:16.530 13:41:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.530 13:41:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:16.530 13:41:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.530 13:41:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.530 [2024-11-20 13:41:15.824887] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:16.530 [2024-11-20 13:41:15.825072] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:16.530 [2024-11-20 13:41:15.825094] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:16.530 [2024-11-20 13:41:15.825108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:16.530 [2024-11-20 13:41:15.825116] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:16.530 [2024-11-20 13:41:15.825128] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:16.530 [2024-11-20 13:41:15.825136] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:21:16.530 [2024-11-20 13:41:15.825148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:16.530 13:41:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.530 13:41:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:16.530 13:41:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.530 13:41:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.530 [2024-11-20 13:41:15.874605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:16.530 BaseBdev1 00:21:16.530 13:41:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.530 13:41:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:16.530 13:41:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:16.530 13:41:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:16.530 13:41:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:16.530 13:41:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:16.530 13:41:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:16.530 13:41:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:16.530 13:41:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.530 13:41:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.530 13:41:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.530 
13:41:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:16.530 13:41:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.530 13:41:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.530 [ 00:21:16.530 { 00:21:16.530 "name": "BaseBdev1", 00:21:16.530 "aliases": [ 00:21:16.530 "211db352-4b3a-4c6f-8eef-ca4babf326ea" 00:21:16.530 ], 00:21:16.530 "product_name": "Malloc disk", 00:21:16.530 "block_size": 512, 00:21:16.530 "num_blocks": 65536, 00:21:16.530 "uuid": "211db352-4b3a-4c6f-8eef-ca4babf326ea", 00:21:16.530 "assigned_rate_limits": { 00:21:16.530 "rw_ios_per_sec": 0, 00:21:16.530 "rw_mbytes_per_sec": 0, 00:21:16.530 "r_mbytes_per_sec": 0, 00:21:16.530 "w_mbytes_per_sec": 0 00:21:16.530 }, 00:21:16.530 "claimed": true, 00:21:16.530 "claim_type": "exclusive_write", 00:21:16.530 "zoned": false, 00:21:16.530 "supported_io_types": { 00:21:16.530 "read": true, 00:21:16.530 "write": true, 00:21:16.530 "unmap": true, 00:21:16.530 "flush": true, 00:21:16.530 "reset": true, 00:21:16.530 "nvme_admin": false, 00:21:16.530 "nvme_io": false, 00:21:16.530 "nvme_io_md": false, 00:21:16.530 "write_zeroes": true, 00:21:16.530 "zcopy": true, 00:21:16.530 "get_zone_info": false, 00:21:16.530 "zone_management": false, 00:21:16.530 "zone_append": false, 00:21:16.530 "compare": false, 00:21:16.530 "compare_and_write": false, 00:21:16.530 "abort": true, 00:21:16.530 "seek_hole": false, 00:21:16.530 "seek_data": false, 00:21:16.530 "copy": true, 00:21:16.530 "nvme_iov_md": false 00:21:16.530 }, 00:21:16.530 "memory_domains": [ 00:21:16.530 { 00:21:16.530 "dma_device_id": "system", 00:21:16.530 "dma_device_type": 1 00:21:16.530 }, 00:21:16.530 { 00:21:16.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:16.530 "dma_device_type": 2 00:21:16.530 } 00:21:16.530 ], 00:21:16.530 "driver_specific": {} 00:21:16.530 } 
00:21:16.530 ] 00:21:16.530 13:41:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.530 13:41:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:16.530 13:41:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:16.530 13:41:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:16.530 13:41:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:16.530 13:41:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:16.530 13:41:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:16.530 13:41:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:16.530 13:41:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:16.530 13:41:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:16.530 13:41:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:16.530 13:41:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:16.530 13:41:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:16.530 13:41:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.530 13:41:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.530 13:41:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.530 13:41:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:16.530 13:41:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:16.530 "name": "Existed_Raid", 00:21:16.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.530 "strip_size_kb": 64, 00:21:16.530 "state": "configuring", 00:21:16.530 "raid_level": "raid5f", 00:21:16.530 "superblock": false, 00:21:16.530 "num_base_bdevs": 4, 00:21:16.530 "num_base_bdevs_discovered": 1, 00:21:16.530 "num_base_bdevs_operational": 4, 00:21:16.530 "base_bdevs_list": [ 00:21:16.530 { 00:21:16.530 "name": "BaseBdev1", 00:21:16.530 "uuid": "211db352-4b3a-4c6f-8eef-ca4babf326ea", 00:21:16.531 "is_configured": true, 00:21:16.531 "data_offset": 0, 00:21:16.531 "data_size": 65536 00:21:16.531 }, 00:21:16.531 { 00:21:16.531 "name": "BaseBdev2", 00:21:16.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.531 "is_configured": false, 00:21:16.531 "data_offset": 0, 00:21:16.531 "data_size": 0 00:21:16.531 }, 00:21:16.531 { 00:21:16.531 "name": "BaseBdev3", 00:21:16.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.531 "is_configured": false, 00:21:16.531 "data_offset": 0, 00:21:16.531 "data_size": 0 00:21:16.531 }, 00:21:16.531 { 00:21:16.531 "name": "BaseBdev4", 00:21:16.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.531 "is_configured": false, 00:21:16.531 "data_offset": 0, 00:21:16.531 "data_size": 0 00:21:16.531 } 00:21:16.531 ] 00:21:16.531 }' 00:21:16.531 13:41:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:16.531 13:41:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.099 13:41:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:17.099 13:41:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.099 13:41:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.099 
[2024-11-20 13:41:16.362413] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:17.099 [2024-11-20 13:41:16.362600] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:17.099 13:41:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.099 13:41:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:17.099 13:41:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.099 13:41:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.099 [2024-11-20 13:41:16.374483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:17.099 [2024-11-20 13:41:16.376638] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:17.099 [2024-11-20 13:41:16.376693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:17.099 [2024-11-20 13:41:16.376705] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:17.099 [2024-11-20 13:41:16.376720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:17.099 [2024-11-20 13:41:16.376729] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:17.099 [2024-11-20 13:41:16.376741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:17.099 13:41:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.099 13:41:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:17.099 13:41:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:21:17.099 13:41:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:17.099 13:41:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:17.099 13:41:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:17.099 13:41:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:17.099 13:41:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:17.099 13:41:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:17.099 13:41:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:17.099 13:41:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:17.099 13:41:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:17.099 13:41:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:17.099 13:41:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:17.099 13:41:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.099 13:41:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.099 13:41:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.099 13:41:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.099 13:41:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:17.099 "name": "Existed_Raid", 00:21:17.099 "uuid": "00000000-0000-0000-0000-000000000000", 
00:21:17.099 "strip_size_kb": 64, 00:21:17.099 "state": "configuring", 00:21:17.099 "raid_level": "raid5f", 00:21:17.099 "superblock": false, 00:21:17.099 "num_base_bdevs": 4, 00:21:17.099 "num_base_bdevs_discovered": 1, 00:21:17.099 "num_base_bdevs_operational": 4, 00:21:17.099 "base_bdevs_list": [ 00:21:17.099 { 00:21:17.099 "name": "BaseBdev1", 00:21:17.099 "uuid": "211db352-4b3a-4c6f-8eef-ca4babf326ea", 00:21:17.099 "is_configured": true, 00:21:17.099 "data_offset": 0, 00:21:17.099 "data_size": 65536 00:21:17.099 }, 00:21:17.099 { 00:21:17.099 "name": "BaseBdev2", 00:21:17.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.099 "is_configured": false, 00:21:17.099 "data_offset": 0, 00:21:17.099 "data_size": 0 00:21:17.099 }, 00:21:17.099 { 00:21:17.099 "name": "BaseBdev3", 00:21:17.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.099 "is_configured": false, 00:21:17.099 "data_offset": 0, 00:21:17.099 "data_size": 0 00:21:17.099 }, 00:21:17.099 { 00:21:17.099 "name": "BaseBdev4", 00:21:17.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.099 "is_configured": false, 00:21:17.099 "data_offset": 0, 00:21:17.099 "data_size": 0 00:21:17.099 } 00:21:17.099 ] 00:21:17.099 }' 00:21:17.099 13:41:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:17.099 13:41:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.359 13:41:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:17.359 13:41:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.359 13:41:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.359 [2024-11-20 13:41:16.842830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:17.618 BaseBdev2 00:21:17.618 13:41:16 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.618 13:41:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:17.618 13:41:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:17.618 13:41:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:17.618 13:41:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:17.618 13:41:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:17.618 13:41:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:17.618 13:41:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:17.618 13:41:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.618 13:41:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.618 13:41:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.618 13:41:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:17.618 13:41:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.618 13:41:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.618 [ 00:21:17.618 { 00:21:17.618 "name": "BaseBdev2", 00:21:17.618 "aliases": [ 00:21:17.618 "13564353-0319-430b-ba5d-51134ddd04c4" 00:21:17.618 ], 00:21:17.618 "product_name": "Malloc disk", 00:21:17.618 "block_size": 512, 00:21:17.618 "num_blocks": 65536, 00:21:17.618 "uuid": "13564353-0319-430b-ba5d-51134ddd04c4", 00:21:17.618 "assigned_rate_limits": { 00:21:17.618 "rw_ios_per_sec": 0, 00:21:17.618 "rw_mbytes_per_sec": 0, 00:21:17.618 
"r_mbytes_per_sec": 0, 00:21:17.618 "w_mbytes_per_sec": 0 00:21:17.618 }, 00:21:17.618 "claimed": true, 00:21:17.618 "claim_type": "exclusive_write", 00:21:17.618 "zoned": false, 00:21:17.618 "supported_io_types": { 00:21:17.618 "read": true, 00:21:17.618 "write": true, 00:21:17.618 "unmap": true, 00:21:17.618 "flush": true, 00:21:17.618 "reset": true, 00:21:17.618 "nvme_admin": false, 00:21:17.618 "nvme_io": false, 00:21:17.618 "nvme_io_md": false, 00:21:17.618 "write_zeroes": true, 00:21:17.618 "zcopy": true, 00:21:17.618 "get_zone_info": false, 00:21:17.618 "zone_management": false, 00:21:17.619 "zone_append": false, 00:21:17.619 "compare": false, 00:21:17.619 "compare_and_write": false, 00:21:17.619 "abort": true, 00:21:17.619 "seek_hole": false, 00:21:17.619 "seek_data": false, 00:21:17.619 "copy": true, 00:21:17.619 "nvme_iov_md": false 00:21:17.619 }, 00:21:17.619 "memory_domains": [ 00:21:17.619 { 00:21:17.619 "dma_device_id": "system", 00:21:17.619 "dma_device_type": 1 00:21:17.619 }, 00:21:17.619 { 00:21:17.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:17.619 "dma_device_type": 2 00:21:17.619 } 00:21:17.619 ], 00:21:17.619 "driver_specific": {} 00:21:17.619 } 00:21:17.619 ] 00:21:17.619 13:41:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.619 13:41:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:17.619 13:41:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:17.619 13:41:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:17.619 13:41:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:17.619 13:41:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:17.619 13:41:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:21:17.619 13:41:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:17.619 13:41:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:17.619 13:41:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:17.619 13:41:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:17.619 13:41:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:17.619 13:41:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:17.619 13:41:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:17.619 13:41:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:17.619 13:41:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.619 13:41:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.619 13:41:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.619 13:41:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.619 13:41:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:17.619 "name": "Existed_Raid", 00:21:17.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.619 "strip_size_kb": 64, 00:21:17.619 "state": "configuring", 00:21:17.619 "raid_level": "raid5f", 00:21:17.619 "superblock": false, 00:21:17.619 "num_base_bdevs": 4, 00:21:17.619 "num_base_bdevs_discovered": 2, 00:21:17.619 "num_base_bdevs_operational": 4, 00:21:17.619 "base_bdevs_list": [ 00:21:17.619 { 00:21:17.619 "name": "BaseBdev1", 00:21:17.619 "uuid": 
"211db352-4b3a-4c6f-8eef-ca4babf326ea", 00:21:17.619 "is_configured": true, 00:21:17.619 "data_offset": 0, 00:21:17.619 "data_size": 65536 00:21:17.619 }, 00:21:17.619 { 00:21:17.619 "name": "BaseBdev2", 00:21:17.619 "uuid": "13564353-0319-430b-ba5d-51134ddd04c4", 00:21:17.619 "is_configured": true, 00:21:17.619 "data_offset": 0, 00:21:17.619 "data_size": 65536 00:21:17.619 }, 00:21:17.619 { 00:21:17.619 "name": "BaseBdev3", 00:21:17.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.619 "is_configured": false, 00:21:17.619 "data_offset": 0, 00:21:17.619 "data_size": 0 00:21:17.619 }, 00:21:17.619 { 00:21:17.619 "name": "BaseBdev4", 00:21:17.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.619 "is_configured": false, 00:21:17.619 "data_offset": 0, 00:21:17.619 "data_size": 0 00:21:17.619 } 00:21:17.619 ] 00:21:17.619 }' 00:21:17.619 13:41:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:17.619 13:41:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.878 13:41:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:17.878 13:41:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.878 13:41:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.878 [2024-11-20 13:41:17.346201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:17.878 BaseBdev3 00:21:17.878 13:41:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.878 13:41:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:21:17.878 13:41:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:21:17.878 13:41:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:21:17.878 13:41:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:17.878 13:41:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:17.878 13:41:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:17.878 13:41:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:17.878 13:41:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.878 13:41:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.878 13:41:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.879 13:41:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:17.879 13:41:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.879 13:41:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.140 [ 00:21:18.140 { 00:21:18.140 "name": "BaseBdev3", 00:21:18.140 "aliases": [ 00:21:18.140 "363552f0-a326-4c04-bab6-8fb005a929a8" 00:21:18.140 ], 00:21:18.140 "product_name": "Malloc disk", 00:21:18.140 "block_size": 512, 00:21:18.140 "num_blocks": 65536, 00:21:18.140 "uuid": "363552f0-a326-4c04-bab6-8fb005a929a8", 00:21:18.140 "assigned_rate_limits": { 00:21:18.140 "rw_ios_per_sec": 0, 00:21:18.140 "rw_mbytes_per_sec": 0, 00:21:18.140 "r_mbytes_per_sec": 0, 00:21:18.140 "w_mbytes_per_sec": 0 00:21:18.140 }, 00:21:18.140 "claimed": true, 00:21:18.140 "claim_type": "exclusive_write", 00:21:18.140 "zoned": false, 00:21:18.140 "supported_io_types": { 00:21:18.140 "read": true, 00:21:18.140 "write": true, 00:21:18.140 "unmap": true, 00:21:18.140 "flush": true, 00:21:18.140 "reset": true, 00:21:18.140 "nvme_admin": false, 
00:21:18.140 "nvme_io": false, 00:21:18.140 "nvme_io_md": false, 00:21:18.140 "write_zeroes": true, 00:21:18.140 "zcopy": true, 00:21:18.140 "get_zone_info": false, 00:21:18.140 "zone_management": false, 00:21:18.140 "zone_append": false, 00:21:18.140 "compare": false, 00:21:18.140 "compare_and_write": false, 00:21:18.140 "abort": true, 00:21:18.140 "seek_hole": false, 00:21:18.140 "seek_data": false, 00:21:18.140 "copy": true, 00:21:18.140 "nvme_iov_md": false 00:21:18.140 }, 00:21:18.140 "memory_domains": [ 00:21:18.140 { 00:21:18.140 "dma_device_id": "system", 00:21:18.140 "dma_device_type": 1 00:21:18.140 }, 00:21:18.140 { 00:21:18.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:18.140 "dma_device_type": 2 00:21:18.140 } 00:21:18.140 ], 00:21:18.140 "driver_specific": {} 00:21:18.140 } 00:21:18.140 ] 00:21:18.140 13:41:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.140 13:41:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:18.140 13:41:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:18.140 13:41:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:18.140 13:41:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:18.140 13:41:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:18.140 13:41:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:18.140 13:41:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:18.140 13:41:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:18.140 13:41:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:21:18.140 13:41:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:18.140 13:41:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:18.140 13:41:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:18.140 13:41:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:18.140 13:41:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.140 13:41:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:18.140 13:41:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.140 13:41:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.140 13:41:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.140 13:41:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:18.140 "name": "Existed_Raid", 00:21:18.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.140 "strip_size_kb": 64, 00:21:18.140 "state": "configuring", 00:21:18.140 "raid_level": "raid5f", 00:21:18.140 "superblock": false, 00:21:18.140 "num_base_bdevs": 4, 00:21:18.140 "num_base_bdevs_discovered": 3, 00:21:18.140 "num_base_bdevs_operational": 4, 00:21:18.140 "base_bdevs_list": [ 00:21:18.140 { 00:21:18.140 "name": "BaseBdev1", 00:21:18.140 "uuid": "211db352-4b3a-4c6f-8eef-ca4babf326ea", 00:21:18.140 "is_configured": true, 00:21:18.140 "data_offset": 0, 00:21:18.140 "data_size": 65536 00:21:18.140 }, 00:21:18.140 { 00:21:18.140 "name": "BaseBdev2", 00:21:18.140 "uuid": "13564353-0319-430b-ba5d-51134ddd04c4", 00:21:18.140 "is_configured": true, 00:21:18.140 "data_offset": 0, 00:21:18.140 "data_size": 65536 00:21:18.140 }, 00:21:18.140 { 
00:21:18.140 "name": "BaseBdev3", 00:21:18.140 "uuid": "363552f0-a326-4c04-bab6-8fb005a929a8", 00:21:18.140 "is_configured": true, 00:21:18.140 "data_offset": 0, 00:21:18.140 "data_size": 65536 00:21:18.140 }, 00:21:18.140 { 00:21:18.140 "name": "BaseBdev4", 00:21:18.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.140 "is_configured": false, 00:21:18.140 "data_offset": 0, 00:21:18.140 "data_size": 0 00:21:18.140 } 00:21:18.140 ] 00:21:18.140 }' 00:21:18.140 13:41:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:18.140 13:41:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.401 13:41:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:21:18.401 13:41:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.401 13:41:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.401 [2024-11-20 13:41:17.859784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:18.401 [2024-11-20 13:41:17.859865] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:18.401 [2024-11-20 13:41:17.859875] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:21:18.401 [2024-11-20 13:41:17.860187] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:18.401 [2024-11-20 13:41:17.868321] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:18.401 [2024-11-20 13:41:17.868356] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:18.401 [2024-11-20 13:41:17.868672] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:18.401 BaseBdev4 00:21:18.401 13:41:17 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.401 13:41:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:21:18.401 13:41:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:21:18.401 13:41:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:18.401 13:41:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:18.401 13:41:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:18.401 13:41:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:18.401 13:41:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:18.401 13:41:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.401 13:41:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.401 13:41:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.401 13:41:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:18.401 13:41:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.401 13:41:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.660 [ 00:21:18.660 { 00:21:18.660 "name": "BaseBdev4", 00:21:18.660 "aliases": [ 00:21:18.660 "81acc159-8cf9-49a4-b604-7dfc8b97c453" 00:21:18.660 ], 00:21:18.660 "product_name": "Malloc disk", 00:21:18.660 "block_size": 512, 00:21:18.660 "num_blocks": 65536, 00:21:18.660 "uuid": "81acc159-8cf9-49a4-b604-7dfc8b97c453", 00:21:18.660 "assigned_rate_limits": { 00:21:18.660 "rw_ios_per_sec": 0, 00:21:18.660 
"rw_mbytes_per_sec": 0, 00:21:18.660 "r_mbytes_per_sec": 0, 00:21:18.660 "w_mbytes_per_sec": 0 00:21:18.660 }, 00:21:18.660 "claimed": true, 00:21:18.660 "claim_type": "exclusive_write", 00:21:18.660 "zoned": false, 00:21:18.660 "supported_io_types": { 00:21:18.660 "read": true, 00:21:18.660 "write": true, 00:21:18.660 "unmap": true, 00:21:18.660 "flush": true, 00:21:18.660 "reset": true, 00:21:18.660 "nvme_admin": false, 00:21:18.660 "nvme_io": false, 00:21:18.660 "nvme_io_md": false, 00:21:18.660 "write_zeroes": true, 00:21:18.660 "zcopy": true, 00:21:18.660 "get_zone_info": false, 00:21:18.660 "zone_management": false, 00:21:18.660 "zone_append": false, 00:21:18.660 "compare": false, 00:21:18.660 "compare_and_write": false, 00:21:18.660 "abort": true, 00:21:18.660 "seek_hole": false, 00:21:18.660 "seek_data": false, 00:21:18.660 "copy": true, 00:21:18.660 "nvme_iov_md": false 00:21:18.660 }, 00:21:18.660 "memory_domains": [ 00:21:18.660 { 00:21:18.660 "dma_device_id": "system", 00:21:18.660 "dma_device_type": 1 00:21:18.660 }, 00:21:18.660 { 00:21:18.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:18.660 "dma_device_type": 2 00:21:18.660 } 00:21:18.660 ], 00:21:18.660 "driver_specific": {} 00:21:18.660 } 00:21:18.660 ] 00:21:18.660 13:41:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.660 13:41:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:18.661 13:41:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:18.661 13:41:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:18.661 13:41:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:21:18.661 13:41:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:18.661 13:41:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:18.661 13:41:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:18.661 13:41:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:18.661 13:41:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:18.661 13:41:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:18.661 13:41:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:18.661 13:41:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:18.661 13:41:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:18.661 13:41:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.661 13:41:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.661 13:41:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:18.661 13:41:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.661 13:41:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.661 13:41:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:18.661 "name": "Existed_Raid", 00:21:18.661 "uuid": "8a4b3369-229f-406d-b13c-d6d6e40afbd2", 00:21:18.661 "strip_size_kb": 64, 00:21:18.661 "state": "online", 00:21:18.661 "raid_level": "raid5f", 00:21:18.661 "superblock": false, 00:21:18.661 "num_base_bdevs": 4, 00:21:18.661 "num_base_bdevs_discovered": 4, 00:21:18.661 "num_base_bdevs_operational": 4, 00:21:18.661 "base_bdevs_list": [ 00:21:18.661 { 00:21:18.661 "name": 
"BaseBdev1", 00:21:18.661 "uuid": "211db352-4b3a-4c6f-8eef-ca4babf326ea", 00:21:18.661 "is_configured": true, 00:21:18.661 "data_offset": 0, 00:21:18.661 "data_size": 65536 00:21:18.661 }, 00:21:18.661 { 00:21:18.661 "name": "BaseBdev2", 00:21:18.661 "uuid": "13564353-0319-430b-ba5d-51134ddd04c4", 00:21:18.661 "is_configured": true, 00:21:18.661 "data_offset": 0, 00:21:18.661 "data_size": 65536 00:21:18.661 }, 00:21:18.661 { 00:21:18.661 "name": "BaseBdev3", 00:21:18.661 "uuid": "363552f0-a326-4c04-bab6-8fb005a929a8", 00:21:18.661 "is_configured": true, 00:21:18.661 "data_offset": 0, 00:21:18.661 "data_size": 65536 00:21:18.661 }, 00:21:18.661 { 00:21:18.661 "name": "BaseBdev4", 00:21:18.661 "uuid": "81acc159-8cf9-49a4-b604-7dfc8b97c453", 00:21:18.661 "is_configured": true, 00:21:18.661 "data_offset": 0, 00:21:18.661 "data_size": 65536 00:21:18.661 } 00:21:18.661 ] 00:21:18.661 }' 00:21:18.661 13:41:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:18.661 13:41:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.920 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:18.920 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:18.920 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:18.920 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:18.920 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:18.920 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:18.920 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:18.921 13:41:18 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.921 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:18.921 13:41:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.921 [2024-11-20 13:41:18.344871] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:18.921 13:41:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.921 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:18.921 "name": "Existed_Raid", 00:21:18.921 "aliases": [ 00:21:18.921 "8a4b3369-229f-406d-b13c-d6d6e40afbd2" 00:21:18.921 ], 00:21:18.921 "product_name": "Raid Volume", 00:21:18.921 "block_size": 512, 00:21:18.921 "num_blocks": 196608, 00:21:18.921 "uuid": "8a4b3369-229f-406d-b13c-d6d6e40afbd2", 00:21:18.921 "assigned_rate_limits": { 00:21:18.921 "rw_ios_per_sec": 0, 00:21:18.921 "rw_mbytes_per_sec": 0, 00:21:18.921 "r_mbytes_per_sec": 0, 00:21:18.921 "w_mbytes_per_sec": 0 00:21:18.921 }, 00:21:18.921 "claimed": false, 00:21:18.921 "zoned": false, 00:21:18.921 "supported_io_types": { 00:21:18.921 "read": true, 00:21:18.921 "write": true, 00:21:18.921 "unmap": false, 00:21:18.921 "flush": false, 00:21:18.921 "reset": true, 00:21:18.921 "nvme_admin": false, 00:21:18.921 "nvme_io": false, 00:21:18.921 "nvme_io_md": false, 00:21:18.921 "write_zeroes": true, 00:21:18.921 "zcopy": false, 00:21:18.921 "get_zone_info": false, 00:21:18.921 "zone_management": false, 00:21:18.921 "zone_append": false, 00:21:18.921 "compare": false, 00:21:18.921 "compare_and_write": false, 00:21:18.921 "abort": false, 00:21:18.921 "seek_hole": false, 00:21:18.921 "seek_data": false, 00:21:18.921 "copy": false, 00:21:18.921 "nvme_iov_md": false 00:21:18.921 }, 00:21:18.921 "driver_specific": { 00:21:18.921 "raid": { 00:21:18.921 "uuid": "8a4b3369-229f-406d-b13c-d6d6e40afbd2", 00:21:18.921 "strip_size_kb": 64, 
00:21:18.921 "state": "online", 00:21:18.921 "raid_level": "raid5f", 00:21:18.921 "superblock": false, 00:21:18.921 "num_base_bdevs": 4, 00:21:18.921 "num_base_bdevs_discovered": 4, 00:21:18.921 "num_base_bdevs_operational": 4, 00:21:18.921 "base_bdevs_list": [ 00:21:18.921 { 00:21:18.921 "name": "BaseBdev1", 00:21:18.921 "uuid": "211db352-4b3a-4c6f-8eef-ca4babf326ea", 00:21:18.921 "is_configured": true, 00:21:18.921 "data_offset": 0, 00:21:18.921 "data_size": 65536 00:21:18.921 }, 00:21:18.921 { 00:21:18.921 "name": "BaseBdev2", 00:21:18.921 "uuid": "13564353-0319-430b-ba5d-51134ddd04c4", 00:21:18.921 "is_configured": true, 00:21:18.921 "data_offset": 0, 00:21:18.921 "data_size": 65536 00:21:18.921 }, 00:21:18.921 { 00:21:18.921 "name": "BaseBdev3", 00:21:18.921 "uuid": "363552f0-a326-4c04-bab6-8fb005a929a8", 00:21:18.921 "is_configured": true, 00:21:18.921 "data_offset": 0, 00:21:18.921 "data_size": 65536 00:21:18.921 }, 00:21:18.921 { 00:21:18.921 "name": "BaseBdev4", 00:21:18.921 "uuid": "81acc159-8cf9-49a4-b604-7dfc8b97c453", 00:21:18.921 "is_configured": true, 00:21:18.921 "data_offset": 0, 00:21:18.921 "data_size": 65536 00:21:18.921 } 00:21:18.921 ] 00:21:18.921 } 00:21:18.921 } 00:21:18.921 }' 00:21:18.921 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:19.180 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:19.180 BaseBdev2 00:21:19.180 BaseBdev3 00:21:19.180 BaseBdev4' 00:21:19.180 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:19.180 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:19.180 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:19.180 13:41:18 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:19.180 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:19.180 13:41:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.181 13:41:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.181 13:41:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.181 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:19.181 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:19.181 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:19.181 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:19.181 13:41:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.181 13:41:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.181 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:19.181 13:41:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.181 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:19.181 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:19.181 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:19.181 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:21:19.181 13:41:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.181 13:41:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.181 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:19.181 13:41:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.181 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:19.181 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:19.181 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:19.181 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:19.181 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:21:19.181 13:41:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.181 13:41:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.181 13:41:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.181 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:19.181 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:19.181 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:19.181 13:41:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.181 13:41:18 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:21:19.181 [2024-11-20 13:41:18.652302] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:19.440 13:41:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.440 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:19.440 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:21:19.440 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:19.440 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:21:19.440 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:19.440 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:21:19.440 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:19.440 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:19.440 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:19.440 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:19.440 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:19.440 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:19.440 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:19.440 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:19.440 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:19.440 13:41:18 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.440 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:19.440 13:41:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.440 13:41:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.440 13:41:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.440 13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:19.440 "name": "Existed_Raid", 00:21:19.440 "uuid": "8a4b3369-229f-406d-b13c-d6d6e40afbd2", 00:21:19.440 "strip_size_kb": 64, 00:21:19.440 "state": "online", 00:21:19.440 "raid_level": "raid5f", 00:21:19.440 "superblock": false, 00:21:19.440 "num_base_bdevs": 4, 00:21:19.440 "num_base_bdevs_discovered": 3, 00:21:19.440 "num_base_bdevs_operational": 3, 00:21:19.440 "base_bdevs_list": [ 00:21:19.440 { 00:21:19.440 "name": null, 00:21:19.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.440 "is_configured": false, 00:21:19.440 "data_offset": 0, 00:21:19.440 "data_size": 65536 00:21:19.440 }, 00:21:19.440 { 00:21:19.440 "name": "BaseBdev2", 00:21:19.440 "uuid": "13564353-0319-430b-ba5d-51134ddd04c4", 00:21:19.440 "is_configured": true, 00:21:19.440 "data_offset": 0, 00:21:19.440 "data_size": 65536 00:21:19.440 }, 00:21:19.440 { 00:21:19.440 "name": "BaseBdev3", 00:21:19.440 "uuid": "363552f0-a326-4c04-bab6-8fb005a929a8", 00:21:19.440 "is_configured": true, 00:21:19.440 "data_offset": 0, 00:21:19.440 "data_size": 65536 00:21:19.440 }, 00:21:19.440 { 00:21:19.440 "name": "BaseBdev4", 00:21:19.440 "uuid": "81acc159-8cf9-49a4-b604-7dfc8b97c453", 00:21:19.440 "is_configured": true, 00:21:19.440 "data_offset": 0, 00:21:19.440 "data_size": 65536 00:21:19.440 } 00:21:19.440 ] 00:21:19.440 }' 00:21:19.440 
13:41:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:19.440 13:41:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.699 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:19.699 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:19.699 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.699 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:19.699 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.699 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.699 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.956 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:19.956 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:19.956 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:19.956 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.956 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.956 [2024-11-20 13:41:19.199502] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:19.956 [2024-11-20 13:41:19.199752] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:19.956 [2024-11-20 13:41:19.294452] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:19.956 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:21:19.956 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:19.956 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:19.956 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.956 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:19.956 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.956 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.956 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.956 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:19.956 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:19.956 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:21:19.956 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.956 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.956 [2024-11-20 13:41:19.346448] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.215 [2024-11-20 13:41:19.496858] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:20.215 [2024-11-20 13:41:19.497030] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.215 BaseBdev2 00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.215 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.475 [ 00:21:20.475 { 00:21:20.475 "name": "BaseBdev2", 00:21:20.475 "aliases": [ 00:21:20.475 "da2ddd67-7a7e-42c5-bd5e-4226b5b882a4" 00:21:20.475 ], 00:21:20.475 "product_name": "Malloc disk", 00:21:20.475 "block_size": 512, 00:21:20.475 "num_blocks": 65536, 00:21:20.475 "uuid": "da2ddd67-7a7e-42c5-bd5e-4226b5b882a4", 00:21:20.475 "assigned_rate_limits": { 00:21:20.475 "rw_ios_per_sec": 0, 00:21:20.475 "rw_mbytes_per_sec": 0, 00:21:20.475 "r_mbytes_per_sec": 0, 00:21:20.475 "w_mbytes_per_sec": 0 00:21:20.475 }, 00:21:20.475 "claimed": false, 00:21:20.475 "zoned": false, 00:21:20.475 "supported_io_types": { 00:21:20.475 "read": true, 00:21:20.475 "write": true, 00:21:20.475 "unmap": true, 00:21:20.475 "flush": true, 00:21:20.475 "reset": true, 00:21:20.475 "nvme_admin": false, 00:21:20.475 "nvme_io": false, 00:21:20.475 "nvme_io_md": false, 00:21:20.475 "write_zeroes": true, 00:21:20.475 "zcopy": true, 00:21:20.475 "get_zone_info": false, 00:21:20.475 "zone_management": false, 00:21:20.475 "zone_append": false, 00:21:20.475 "compare": false, 00:21:20.475 "compare_and_write": false, 00:21:20.475 "abort": true, 00:21:20.476 "seek_hole": false, 00:21:20.476 "seek_data": false, 00:21:20.476 "copy": true, 00:21:20.476 "nvme_iov_md": false 00:21:20.476 }, 00:21:20.476 "memory_domains": [ 00:21:20.476 { 00:21:20.476 "dma_device_id": "system", 00:21:20.476 
"dma_device_type": 1 00:21:20.476 }, 00:21:20.476 { 00:21:20.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:20.476 "dma_device_type": 2 00:21:20.476 } 00:21:20.476 ], 00:21:20.476 "driver_specific": {} 00:21:20.476 } 00:21:20.476 ] 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.476 BaseBdev3 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:20.476 13:41:19 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.476 [ 00:21:20.476 { 00:21:20.476 "name": "BaseBdev3", 00:21:20.476 "aliases": [ 00:21:20.476 "798e4cdf-c2f3-48f3-996b-6feaf10122fd" 00:21:20.476 ], 00:21:20.476 "product_name": "Malloc disk", 00:21:20.476 "block_size": 512, 00:21:20.476 "num_blocks": 65536, 00:21:20.476 "uuid": "798e4cdf-c2f3-48f3-996b-6feaf10122fd", 00:21:20.476 "assigned_rate_limits": { 00:21:20.476 "rw_ios_per_sec": 0, 00:21:20.476 "rw_mbytes_per_sec": 0, 00:21:20.476 "r_mbytes_per_sec": 0, 00:21:20.476 "w_mbytes_per_sec": 0 00:21:20.476 }, 00:21:20.476 "claimed": false, 00:21:20.476 "zoned": false, 00:21:20.476 "supported_io_types": { 00:21:20.476 "read": true, 00:21:20.476 "write": true, 00:21:20.476 "unmap": true, 00:21:20.476 "flush": true, 00:21:20.476 "reset": true, 00:21:20.476 "nvme_admin": false, 00:21:20.476 "nvme_io": false, 00:21:20.476 "nvme_io_md": false, 00:21:20.476 "write_zeroes": true, 00:21:20.476 "zcopy": true, 00:21:20.476 "get_zone_info": false, 00:21:20.476 "zone_management": false, 00:21:20.476 "zone_append": false, 00:21:20.476 "compare": false, 00:21:20.476 "compare_and_write": false, 00:21:20.476 "abort": true, 00:21:20.476 "seek_hole": false, 00:21:20.476 "seek_data": false, 00:21:20.476 "copy": true, 00:21:20.476 "nvme_iov_md": false 00:21:20.476 }, 00:21:20.476 "memory_domains": [ 00:21:20.476 { 00:21:20.476 
"dma_device_id": "system", 00:21:20.476 "dma_device_type": 1 00:21:20.476 }, 00:21:20.476 { 00:21:20.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:20.476 "dma_device_type": 2 00:21:20.476 } 00:21:20.476 ], 00:21:20.476 "driver_specific": {} 00:21:20.476 } 00:21:20.476 ] 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.476 BaseBdev4 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 
00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.476 [ 00:21:20.476 { 00:21:20.476 "name": "BaseBdev4", 00:21:20.476 "aliases": [ 00:21:20.476 "321c6d36-a40c-4115-99e2-b40d3b75df8d" 00:21:20.476 ], 00:21:20.476 "product_name": "Malloc disk", 00:21:20.476 "block_size": 512, 00:21:20.476 "num_blocks": 65536, 00:21:20.476 "uuid": "321c6d36-a40c-4115-99e2-b40d3b75df8d", 00:21:20.476 "assigned_rate_limits": { 00:21:20.476 "rw_ios_per_sec": 0, 00:21:20.476 "rw_mbytes_per_sec": 0, 00:21:20.476 "r_mbytes_per_sec": 0, 00:21:20.476 "w_mbytes_per_sec": 0 00:21:20.476 }, 00:21:20.476 "claimed": false, 00:21:20.476 "zoned": false, 00:21:20.476 "supported_io_types": { 00:21:20.476 "read": true, 00:21:20.476 "write": true, 00:21:20.476 "unmap": true, 00:21:20.476 "flush": true, 00:21:20.476 "reset": true, 00:21:20.476 "nvme_admin": false, 00:21:20.476 "nvme_io": false, 00:21:20.476 "nvme_io_md": false, 00:21:20.476 "write_zeroes": true, 00:21:20.476 "zcopy": true, 00:21:20.476 "get_zone_info": false, 00:21:20.476 "zone_management": false, 00:21:20.476 "zone_append": false, 00:21:20.476 "compare": false, 00:21:20.476 "compare_and_write": false, 00:21:20.476 "abort": true, 00:21:20.476 "seek_hole": false, 00:21:20.476 "seek_data": false, 00:21:20.476 "copy": true, 00:21:20.476 "nvme_iov_md": false 00:21:20.476 }, 00:21:20.476 "memory_domains": [ 
00:21:20.476 { 00:21:20.476 "dma_device_id": "system", 00:21:20.476 "dma_device_type": 1 00:21:20.476 }, 00:21:20.476 { 00:21:20.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:20.476 "dma_device_type": 2 00:21:20.476 } 00:21:20.476 ], 00:21:20.476 "driver_specific": {} 00:21:20.476 } 00:21:20.476 ] 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.476 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.477 [2024-11-20 13:41:19.869664] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:20.477 [2024-11-20 13:41:19.870204] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:20.477 [2024-11-20 13:41:19.870249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:20.477 [2024-11-20 13:41:19.872352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:20.477 [2024-11-20 13:41:19.872397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:20.477 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.477 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:20.477 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:20.477 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:20.477 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:20.477 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:20.477 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:20.477 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:20.477 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:20.477 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:20.477 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:20.477 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:20.477 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:20.477 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.477 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.477 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.477 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:20.477 "name": "Existed_Raid", 00:21:20.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:20.477 "strip_size_kb": 64, 00:21:20.477 "state": "configuring", 00:21:20.477 "raid_level": "raid5f", 00:21:20.477 
"superblock": false, 00:21:20.477 "num_base_bdevs": 4, 00:21:20.477 "num_base_bdevs_discovered": 3, 00:21:20.477 "num_base_bdevs_operational": 4, 00:21:20.477 "base_bdevs_list": [ 00:21:20.477 { 00:21:20.477 "name": "BaseBdev1", 00:21:20.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:20.477 "is_configured": false, 00:21:20.477 "data_offset": 0, 00:21:20.477 "data_size": 0 00:21:20.477 }, 00:21:20.477 { 00:21:20.477 "name": "BaseBdev2", 00:21:20.477 "uuid": "da2ddd67-7a7e-42c5-bd5e-4226b5b882a4", 00:21:20.477 "is_configured": true, 00:21:20.477 "data_offset": 0, 00:21:20.477 "data_size": 65536 00:21:20.477 }, 00:21:20.477 { 00:21:20.477 "name": "BaseBdev3", 00:21:20.477 "uuid": "798e4cdf-c2f3-48f3-996b-6feaf10122fd", 00:21:20.477 "is_configured": true, 00:21:20.477 "data_offset": 0, 00:21:20.477 "data_size": 65536 00:21:20.477 }, 00:21:20.477 { 00:21:20.477 "name": "BaseBdev4", 00:21:20.477 "uuid": "321c6d36-a40c-4115-99e2-b40d3b75df8d", 00:21:20.477 "is_configured": true, 00:21:20.477 "data_offset": 0, 00:21:20.477 "data_size": 65536 00:21:20.477 } 00:21:20.477 ] 00:21:20.477 }' 00:21:20.477 13:41:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:20.477 13:41:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.044 13:41:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:21.045 13:41:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.045 13:41:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.045 [2024-11-20 13:41:20.285099] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:21.045 13:41:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.045 13:41:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:21:21.045 13:41:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:21.045 13:41:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:21.045 13:41:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:21.045 13:41:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:21.045 13:41:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:21.045 13:41:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:21.045 13:41:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:21.045 13:41:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:21.045 13:41:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:21.045 13:41:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.045 13:41:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:21.045 13:41:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.045 13:41:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.045 13:41:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.045 13:41:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:21.045 "name": "Existed_Raid", 00:21:21.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.045 "strip_size_kb": 64, 00:21:21.045 "state": "configuring", 00:21:21.045 "raid_level": "raid5f", 00:21:21.045 "superblock": false, 
00:21:21.045 "num_base_bdevs": 4, 00:21:21.045 "num_base_bdevs_discovered": 2, 00:21:21.045 "num_base_bdevs_operational": 4, 00:21:21.045 "base_bdevs_list": [ 00:21:21.045 { 00:21:21.045 "name": "BaseBdev1", 00:21:21.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.045 "is_configured": false, 00:21:21.045 "data_offset": 0, 00:21:21.045 "data_size": 0 00:21:21.045 }, 00:21:21.045 { 00:21:21.045 "name": null, 00:21:21.045 "uuid": "da2ddd67-7a7e-42c5-bd5e-4226b5b882a4", 00:21:21.045 "is_configured": false, 00:21:21.045 "data_offset": 0, 00:21:21.045 "data_size": 65536 00:21:21.045 }, 00:21:21.045 { 00:21:21.045 "name": "BaseBdev3", 00:21:21.045 "uuid": "798e4cdf-c2f3-48f3-996b-6feaf10122fd", 00:21:21.045 "is_configured": true, 00:21:21.045 "data_offset": 0, 00:21:21.045 "data_size": 65536 00:21:21.045 }, 00:21:21.045 { 00:21:21.045 "name": "BaseBdev4", 00:21:21.045 "uuid": "321c6d36-a40c-4115-99e2-b40d3b75df8d", 00:21:21.045 "is_configured": true, 00:21:21.045 "data_offset": 0, 00:21:21.045 "data_size": 65536 00:21:21.045 } 00:21:21.045 ] 00:21:21.045 }' 00:21:21.045 13:41:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:21.045 13:41:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.305 13:41:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.305 13:41:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.305 13:41:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.305 13:41:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:21.305 13:41:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.305 13:41:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:21:21.305 
13:41:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:21.305 13:41:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.305 13:41:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.565 BaseBdev1 00:21:21.565 [2024-11-20 13:41:20.794923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:21.565 13:41:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.565 13:41:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:21:21.565 13:41:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:21.565 13:41:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:21.565 13:41:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:21.565 13:41:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:21.565 13:41:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:21.565 13:41:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:21.565 13:41:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.565 13:41:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.565 13:41:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.565 13:41:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:21.565 13:41:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.565 
13:41:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.565 [ 00:21:21.565 { 00:21:21.565 "name": "BaseBdev1", 00:21:21.565 "aliases": [ 00:21:21.565 "249fb161-3a16-4bf9-a35b-59eed93e710e" 00:21:21.565 ], 00:21:21.565 "product_name": "Malloc disk", 00:21:21.565 "block_size": 512, 00:21:21.565 "num_blocks": 65536, 00:21:21.565 "uuid": "249fb161-3a16-4bf9-a35b-59eed93e710e", 00:21:21.565 "assigned_rate_limits": { 00:21:21.565 "rw_ios_per_sec": 0, 00:21:21.565 "rw_mbytes_per_sec": 0, 00:21:21.565 "r_mbytes_per_sec": 0, 00:21:21.565 "w_mbytes_per_sec": 0 00:21:21.565 }, 00:21:21.565 "claimed": true, 00:21:21.565 "claim_type": "exclusive_write", 00:21:21.565 "zoned": false, 00:21:21.565 "supported_io_types": { 00:21:21.565 "read": true, 00:21:21.565 "write": true, 00:21:21.565 "unmap": true, 00:21:21.565 "flush": true, 00:21:21.565 "reset": true, 00:21:21.565 "nvme_admin": false, 00:21:21.565 "nvme_io": false, 00:21:21.565 "nvme_io_md": false, 00:21:21.565 "write_zeroes": true, 00:21:21.565 "zcopy": true, 00:21:21.565 "get_zone_info": false, 00:21:21.565 "zone_management": false, 00:21:21.565 "zone_append": false, 00:21:21.565 "compare": false, 00:21:21.565 "compare_and_write": false, 00:21:21.565 "abort": true, 00:21:21.565 "seek_hole": false, 00:21:21.565 "seek_data": false, 00:21:21.565 "copy": true, 00:21:21.565 "nvme_iov_md": false 00:21:21.565 }, 00:21:21.565 "memory_domains": [ 00:21:21.565 { 00:21:21.565 "dma_device_id": "system", 00:21:21.565 "dma_device_type": 1 00:21:21.565 }, 00:21:21.565 { 00:21:21.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:21.565 "dma_device_type": 2 00:21:21.565 } 00:21:21.565 ], 00:21:21.565 "driver_specific": {} 00:21:21.565 } 00:21:21.565 ] 00:21:21.565 13:41:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.565 13:41:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:21.565 13:41:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:21.565 13:41:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:21.565 13:41:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:21.565 13:41:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:21.565 13:41:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:21.565 13:41:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:21.565 13:41:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:21.565 13:41:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:21.565 13:41:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:21.565 13:41:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:21.565 13:41:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.565 13:41:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.565 13:41:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:21.565 13:41:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.565 13:41:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.565 13:41:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:21.565 "name": "Existed_Raid", 00:21:21.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.565 "strip_size_kb": 64, 00:21:21.565 "state": 
"configuring", 00:21:21.565 "raid_level": "raid5f", 00:21:21.565 "superblock": false, 00:21:21.565 "num_base_bdevs": 4, 00:21:21.565 "num_base_bdevs_discovered": 3, 00:21:21.565 "num_base_bdevs_operational": 4, 00:21:21.565 "base_bdevs_list": [ 00:21:21.565 { 00:21:21.565 "name": "BaseBdev1", 00:21:21.565 "uuid": "249fb161-3a16-4bf9-a35b-59eed93e710e", 00:21:21.565 "is_configured": true, 00:21:21.565 "data_offset": 0, 00:21:21.565 "data_size": 65536 00:21:21.565 }, 00:21:21.565 { 00:21:21.565 "name": null, 00:21:21.565 "uuid": "da2ddd67-7a7e-42c5-bd5e-4226b5b882a4", 00:21:21.565 "is_configured": false, 00:21:21.565 "data_offset": 0, 00:21:21.565 "data_size": 65536 00:21:21.565 }, 00:21:21.565 { 00:21:21.565 "name": "BaseBdev3", 00:21:21.565 "uuid": "798e4cdf-c2f3-48f3-996b-6feaf10122fd", 00:21:21.565 "is_configured": true, 00:21:21.565 "data_offset": 0, 00:21:21.565 "data_size": 65536 00:21:21.565 }, 00:21:21.565 { 00:21:21.565 "name": "BaseBdev4", 00:21:21.565 "uuid": "321c6d36-a40c-4115-99e2-b40d3b75df8d", 00:21:21.565 "is_configured": true, 00:21:21.565 "data_offset": 0, 00:21:21.565 "data_size": 65536 00:21:21.565 } 00:21:21.565 ] 00:21:21.565 }' 00:21:21.565 13:41:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:21.566 13:41:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.825 13:41:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.825 13:41:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.825 13:41:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.825 13:41:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:21.825 13:41:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.825 13:41:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:21:21.825 13:41:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:21:21.825 13:41:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.825 13:41:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.825 [2024-11-20 13:41:21.298440] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:21.825 13:41:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.825 13:41:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:21.825 13:41:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:21.825 13:41:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:21.825 13:41:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:21.825 13:41:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:21.825 13:41:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:21.825 13:41:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:21.825 13:41:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:21.826 13:41:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:21.826 13:41:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:21.826 13:41:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.826 13:41:21 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.826 13:41:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.826 13:41:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:22.085 13:41:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.085 13:41:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:22.085 "name": "Existed_Raid", 00:21:22.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.085 "strip_size_kb": 64, 00:21:22.085 "state": "configuring", 00:21:22.085 "raid_level": "raid5f", 00:21:22.085 "superblock": false, 00:21:22.085 "num_base_bdevs": 4, 00:21:22.085 "num_base_bdevs_discovered": 2, 00:21:22.085 "num_base_bdevs_operational": 4, 00:21:22.085 "base_bdevs_list": [ 00:21:22.085 { 00:21:22.085 "name": "BaseBdev1", 00:21:22.085 "uuid": "249fb161-3a16-4bf9-a35b-59eed93e710e", 00:21:22.085 "is_configured": true, 00:21:22.085 "data_offset": 0, 00:21:22.085 "data_size": 65536 00:21:22.085 }, 00:21:22.085 { 00:21:22.085 "name": null, 00:21:22.085 "uuid": "da2ddd67-7a7e-42c5-bd5e-4226b5b882a4", 00:21:22.085 "is_configured": false, 00:21:22.085 "data_offset": 0, 00:21:22.085 "data_size": 65536 00:21:22.085 }, 00:21:22.085 { 00:21:22.085 "name": null, 00:21:22.085 "uuid": "798e4cdf-c2f3-48f3-996b-6feaf10122fd", 00:21:22.085 "is_configured": false, 00:21:22.085 "data_offset": 0, 00:21:22.085 "data_size": 65536 00:21:22.085 }, 00:21:22.085 { 00:21:22.085 "name": "BaseBdev4", 00:21:22.085 "uuid": "321c6d36-a40c-4115-99e2-b40d3b75df8d", 00:21:22.085 "is_configured": true, 00:21:22.085 "data_offset": 0, 00:21:22.085 "data_size": 65536 00:21:22.085 } 00:21:22.085 ] 00:21:22.085 }' 00:21:22.085 13:41:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:22.085 13:41:21 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.346 13:41:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.346 13:41:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.346 13:41:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:22.346 13:41:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.346 13:41:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.346 13:41:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:21:22.346 13:41:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:22.346 13:41:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.346 13:41:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.346 [2024-11-20 13:41:21.754418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:22.346 13:41:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.346 13:41:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:22.346 13:41:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:22.346 13:41:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:22.346 13:41:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:22.346 13:41:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:22.346 
13:41:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:22.346 13:41:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:22.346 13:41:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:22.346 13:41:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:22.346 13:41:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:22.346 13:41:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:22.346 13:41:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.346 13:41:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.346 13:41:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.346 13:41:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.346 13:41:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:22.346 "name": "Existed_Raid", 00:21:22.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.346 "strip_size_kb": 64, 00:21:22.346 "state": "configuring", 00:21:22.346 "raid_level": "raid5f", 00:21:22.346 "superblock": false, 00:21:22.346 "num_base_bdevs": 4, 00:21:22.346 "num_base_bdevs_discovered": 3, 00:21:22.346 "num_base_bdevs_operational": 4, 00:21:22.346 "base_bdevs_list": [ 00:21:22.346 { 00:21:22.346 "name": "BaseBdev1", 00:21:22.346 "uuid": "249fb161-3a16-4bf9-a35b-59eed93e710e", 00:21:22.346 "is_configured": true, 00:21:22.346 "data_offset": 0, 00:21:22.346 "data_size": 65536 00:21:22.346 }, 00:21:22.346 { 00:21:22.346 "name": null, 00:21:22.346 "uuid": "da2ddd67-7a7e-42c5-bd5e-4226b5b882a4", 00:21:22.346 "is_configured": 
false, 00:21:22.346 "data_offset": 0, 00:21:22.346 "data_size": 65536 00:21:22.346 }, 00:21:22.346 { 00:21:22.346 "name": "BaseBdev3", 00:21:22.346 "uuid": "798e4cdf-c2f3-48f3-996b-6feaf10122fd", 00:21:22.346 "is_configured": true, 00:21:22.346 "data_offset": 0, 00:21:22.346 "data_size": 65536 00:21:22.346 }, 00:21:22.346 { 00:21:22.346 "name": "BaseBdev4", 00:21:22.346 "uuid": "321c6d36-a40c-4115-99e2-b40d3b75df8d", 00:21:22.346 "is_configured": true, 00:21:22.346 "data_offset": 0, 00:21:22.346 "data_size": 65536 00:21:22.346 } 00:21:22.346 ] 00:21:22.346 }' 00:21:22.346 13:41:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:22.346 13:41:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.935 13:41:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.935 13:41:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:22.935 13:41:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.935 13:41:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.935 13:41:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.935 13:41:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:21:22.935 13:41:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:22.935 13:41:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.935 13:41:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.935 [2024-11-20 13:41:22.206287] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:22.935 13:41:22 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.935 13:41:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:22.935 13:41:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:22.935 13:41:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:22.935 13:41:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:22.935 13:41:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:22.935 13:41:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:22.935 13:41:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:22.935 13:41:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:22.935 13:41:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:22.935 13:41:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:22.935 13:41:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.935 13:41:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:22.935 13:41:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.935 13:41:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.935 13:41:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.935 13:41:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:22.935 "name": "Existed_Raid", 00:21:22.935 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:22.935 "strip_size_kb": 64, 00:21:22.935 "state": "configuring", 00:21:22.935 "raid_level": "raid5f", 00:21:22.935 "superblock": false, 00:21:22.935 "num_base_bdevs": 4, 00:21:22.935 "num_base_bdevs_discovered": 2, 00:21:22.935 "num_base_bdevs_operational": 4, 00:21:22.935 "base_bdevs_list": [ 00:21:22.935 { 00:21:22.935 "name": null, 00:21:22.935 "uuid": "249fb161-3a16-4bf9-a35b-59eed93e710e", 00:21:22.935 "is_configured": false, 00:21:22.935 "data_offset": 0, 00:21:22.935 "data_size": 65536 00:21:22.935 }, 00:21:22.935 { 00:21:22.935 "name": null, 00:21:22.935 "uuid": "da2ddd67-7a7e-42c5-bd5e-4226b5b882a4", 00:21:22.935 "is_configured": false, 00:21:22.935 "data_offset": 0, 00:21:22.935 "data_size": 65536 00:21:22.935 }, 00:21:22.935 { 00:21:22.935 "name": "BaseBdev3", 00:21:22.935 "uuid": "798e4cdf-c2f3-48f3-996b-6feaf10122fd", 00:21:22.935 "is_configured": true, 00:21:22.935 "data_offset": 0, 00:21:22.935 "data_size": 65536 00:21:22.935 }, 00:21:22.935 { 00:21:22.935 "name": "BaseBdev4", 00:21:22.935 "uuid": "321c6d36-a40c-4115-99e2-b40d3b75df8d", 00:21:22.935 "is_configured": true, 00:21:22.935 "data_offset": 0, 00:21:22.935 "data_size": 65536 00:21:22.935 } 00:21:22.935 ] 00:21:22.935 }' 00:21:22.935 13:41:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:22.935 13:41:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.510 13:41:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.510 13:41:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.510 13:41:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:23.510 13:41:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.510 13:41:22 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.510 13:41:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:21:23.510 13:41:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:23.510 13:41:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.510 13:41:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.510 [2024-11-20 13:41:22.781019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:23.510 13:41:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.510 13:41:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:23.510 13:41:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:23.510 13:41:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:23.510 13:41:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:23.510 13:41:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:23.510 13:41:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:23.510 13:41:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:23.510 13:41:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:23.510 13:41:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:23.510 13:41:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:23.510 13:41:22 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.510 13:41:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.510 13:41:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.510 13:41:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:23.510 13:41:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.510 13:41:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:23.510 "name": "Existed_Raid", 00:21:23.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.510 "strip_size_kb": 64, 00:21:23.510 "state": "configuring", 00:21:23.510 "raid_level": "raid5f", 00:21:23.510 "superblock": false, 00:21:23.510 "num_base_bdevs": 4, 00:21:23.510 "num_base_bdevs_discovered": 3, 00:21:23.510 "num_base_bdevs_operational": 4, 00:21:23.510 "base_bdevs_list": [ 00:21:23.510 { 00:21:23.510 "name": null, 00:21:23.510 "uuid": "249fb161-3a16-4bf9-a35b-59eed93e710e", 00:21:23.510 "is_configured": false, 00:21:23.511 "data_offset": 0, 00:21:23.511 "data_size": 65536 00:21:23.511 }, 00:21:23.511 { 00:21:23.511 "name": "BaseBdev2", 00:21:23.511 "uuid": "da2ddd67-7a7e-42c5-bd5e-4226b5b882a4", 00:21:23.511 "is_configured": true, 00:21:23.511 "data_offset": 0, 00:21:23.511 "data_size": 65536 00:21:23.511 }, 00:21:23.511 { 00:21:23.511 "name": "BaseBdev3", 00:21:23.511 "uuid": "798e4cdf-c2f3-48f3-996b-6feaf10122fd", 00:21:23.511 "is_configured": true, 00:21:23.511 "data_offset": 0, 00:21:23.511 "data_size": 65536 00:21:23.511 }, 00:21:23.511 { 00:21:23.511 "name": "BaseBdev4", 00:21:23.511 "uuid": "321c6d36-a40c-4115-99e2-b40d3b75df8d", 00:21:23.511 "is_configured": true, 00:21:23.511 "data_offset": 0, 00:21:23.511 "data_size": 65536 00:21:23.511 } 00:21:23.511 ] 00:21:23.511 }' 00:21:23.511 13:41:22 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:23.511 13:41:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.769 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.769 13:41:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.769 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:23.769 13:41:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.770 13:41:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.027 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:21:24.027 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.027 13:41:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.027 13:41:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.027 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:24.027 13:41:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.028 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 249fb161-3a16-4bf9-a35b-59eed93e710e 00:21:24.028 13:41:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.028 13:41:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.028 [2024-11-20 13:41:23.342691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:24.028 [2024-11-20 
13:41:23.342761] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:24.028 [2024-11-20 13:41:23.342772] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:21:24.028 [2024-11-20 13:41:23.343099] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:21:24.028 [2024-11-20 13:41:23.351511] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:24.028 [2024-11-20 13:41:23.351566] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:21:24.028 [2024-11-20 13:41:23.351876] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:24.028 NewBaseBdev 00:21:24.028 13:41:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.028 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:21:24.028 13:41:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:21:24.028 13:41:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:24.028 13:41:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:24.028 13:41:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:24.028 13:41:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:24.028 13:41:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:24.028 13:41:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.028 13:41:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.028 13:41:23 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.028 13:41:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:24.028 13:41:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.028 13:41:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.028 [ 00:21:24.028 { 00:21:24.028 "name": "NewBaseBdev", 00:21:24.028 "aliases": [ 00:21:24.028 "249fb161-3a16-4bf9-a35b-59eed93e710e" 00:21:24.028 ], 00:21:24.028 "product_name": "Malloc disk", 00:21:24.028 "block_size": 512, 00:21:24.028 "num_blocks": 65536, 00:21:24.028 "uuid": "249fb161-3a16-4bf9-a35b-59eed93e710e", 00:21:24.028 "assigned_rate_limits": { 00:21:24.028 "rw_ios_per_sec": 0, 00:21:24.028 "rw_mbytes_per_sec": 0, 00:21:24.028 "r_mbytes_per_sec": 0, 00:21:24.028 "w_mbytes_per_sec": 0 00:21:24.028 }, 00:21:24.028 "claimed": true, 00:21:24.028 "claim_type": "exclusive_write", 00:21:24.028 "zoned": false, 00:21:24.028 "supported_io_types": { 00:21:24.028 "read": true, 00:21:24.028 "write": true, 00:21:24.028 "unmap": true, 00:21:24.028 "flush": true, 00:21:24.028 "reset": true, 00:21:24.028 "nvme_admin": false, 00:21:24.028 "nvme_io": false, 00:21:24.028 "nvme_io_md": false, 00:21:24.028 "write_zeroes": true, 00:21:24.028 "zcopy": true, 00:21:24.028 "get_zone_info": false, 00:21:24.028 "zone_management": false, 00:21:24.028 "zone_append": false, 00:21:24.028 "compare": false, 00:21:24.028 "compare_and_write": false, 00:21:24.028 "abort": true, 00:21:24.028 "seek_hole": false, 00:21:24.028 "seek_data": false, 00:21:24.028 "copy": true, 00:21:24.028 "nvme_iov_md": false 00:21:24.028 }, 00:21:24.028 "memory_domains": [ 00:21:24.028 { 00:21:24.028 "dma_device_id": "system", 00:21:24.028 "dma_device_type": 1 00:21:24.028 }, 00:21:24.028 { 00:21:24.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:24.028 "dma_device_type": 2 00:21:24.028 } 
00:21:24.028 ], 00:21:24.028 "driver_specific": {} 00:21:24.028 } 00:21:24.028 ] 00:21:24.028 13:41:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.028 13:41:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:24.028 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:21:24.028 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:24.028 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:24.028 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:24.028 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:24.028 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:24.028 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:24.028 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:24.028 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:24.028 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:24.028 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.028 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:24.028 13:41:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.028 13:41:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.028 13:41:23 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.028 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:24.028 "name": "Existed_Raid", 00:21:24.028 "uuid": "f5c3393e-3147-4e73-802c-7be5c7a94f10", 00:21:24.028 "strip_size_kb": 64, 00:21:24.028 "state": "online", 00:21:24.028 "raid_level": "raid5f", 00:21:24.028 "superblock": false, 00:21:24.028 "num_base_bdevs": 4, 00:21:24.028 "num_base_bdevs_discovered": 4, 00:21:24.028 "num_base_bdevs_operational": 4, 00:21:24.028 "base_bdevs_list": [ 00:21:24.028 { 00:21:24.028 "name": "NewBaseBdev", 00:21:24.028 "uuid": "249fb161-3a16-4bf9-a35b-59eed93e710e", 00:21:24.028 "is_configured": true, 00:21:24.028 "data_offset": 0, 00:21:24.028 "data_size": 65536 00:21:24.028 }, 00:21:24.028 { 00:21:24.028 "name": "BaseBdev2", 00:21:24.028 "uuid": "da2ddd67-7a7e-42c5-bd5e-4226b5b882a4", 00:21:24.028 "is_configured": true, 00:21:24.028 "data_offset": 0, 00:21:24.028 "data_size": 65536 00:21:24.028 }, 00:21:24.028 { 00:21:24.028 "name": "BaseBdev3", 00:21:24.028 "uuid": "798e4cdf-c2f3-48f3-996b-6feaf10122fd", 00:21:24.028 "is_configured": true, 00:21:24.028 "data_offset": 0, 00:21:24.028 "data_size": 65536 00:21:24.028 }, 00:21:24.028 { 00:21:24.028 "name": "BaseBdev4", 00:21:24.028 "uuid": "321c6d36-a40c-4115-99e2-b40d3b75df8d", 00:21:24.028 "is_configured": true, 00:21:24.028 "data_offset": 0, 00:21:24.028 "data_size": 65536 00:21:24.028 } 00:21:24.028 ] 00:21:24.028 }' 00:21:24.028 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:24.028 13:41:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.286 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:21:24.286 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:24.286 13:41:23 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:24.286 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:24.286 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:24.286 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:24.286 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:24.286 13:41:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.286 13:41:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.286 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:24.286 [2024-11-20 13:41:23.744303] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:24.286 13:41:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.544 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:24.544 "name": "Existed_Raid", 00:21:24.544 "aliases": [ 00:21:24.544 "f5c3393e-3147-4e73-802c-7be5c7a94f10" 00:21:24.544 ], 00:21:24.544 "product_name": "Raid Volume", 00:21:24.544 "block_size": 512, 00:21:24.544 "num_blocks": 196608, 00:21:24.544 "uuid": "f5c3393e-3147-4e73-802c-7be5c7a94f10", 00:21:24.544 "assigned_rate_limits": { 00:21:24.544 "rw_ios_per_sec": 0, 00:21:24.544 "rw_mbytes_per_sec": 0, 00:21:24.544 "r_mbytes_per_sec": 0, 00:21:24.544 "w_mbytes_per_sec": 0 00:21:24.544 }, 00:21:24.544 "claimed": false, 00:21:24.544 "zoned": false, 00:21:24.544 "supported_io_types": { 00:21:24.544 "read": true, 00:21:24.544 "write": true, 00:21:24.544 "unmap": false, 00:21:24.544 "flush": false, 00:21:24.544 "reset": true, 00:21:24.544 "nvme_admin": false, 00:21:24.544 "nvme_io": false, 00:21:24.545 "nvme_io_md": 
false, 00:21:24.545 "write_zeroes": true, 00:21:24.545 "zcopy": false, 00:21:24.545 "get_zone_info": false, 00:21:24.545 "zone_management": false, 00:21:24.545 "zone_append": false, 00:21:24.545 "compare": false, 00:21:24.545 "compare_and_write": false, 00:21:24.545 "abort": false, 00:21:24.545 "seek_hole": false, 00:21:24.545 "seek_data": false, 00:21:24.545 "copy": false, 00:21:24.545 "nvme_iov_md": false 00:21:24.545 }, 00:21:24.545 "driver_specific": { 00:21:24.545 "raid": { 00:21:24.545 "uuid": "f5c3393e-3147-4e73-802c-7be5c7a94f10", 00:21:24.545 "strip_size_kb": 64, 00:21:24.545 "state": "online", 00:21:24.545 "raid_level": "raid5f", 00:21:24.545 "superblock": false, 00:21:24.545 "num_base_bdevs": 4, 00:21:24.545 "num_base_bdevs_discovered": 4, 00:21:24.545 "num_base_bdevs_operational": 4, 00:21:24.545 "base_bdevs_list": [ 00:21:24.545 { 00:21:24.545 "name": "NewBaseBdev", 00:21:24.545 "uuid": "249fb161-3a16-4bf9-a35b-59eed93e710e", 00:21:24.545 "is_configured": true, 00:21:24.545 "data_offset": 0, 00:21:24.545 "data_size": 65536 00:21:24.545 }, 00:21:24.545 { 00:21:24.545 "name": "BaseBdev2", 00:21:24.545 "uuid": "da2ddd67-7a7e-42c5-bd5e-4226b5b882a4", 00:21:24.545 "is_configured": true, 00:21:24.545 "data_offset": 0, 00:21:24.545 "data_size": 65536 00:21:24.545 }, 00:21:24.545 { 00:21:24.545 "name": "BaseBdev3", 00:21:24.545 "uuid": "798e4cdf-c2f3-48f3-996b-6feaf10122fd", 00:21:24.545 "is_configured": true, 00:21:24.545 "data_offset": 0, 00:21:24.545 "data_size": 65536 00:21:24.545 }, 00:21:24.545 { 00:21:24.545 "name": "BaseBdev4", 00:21:24.545 "uuid": "321c6d36-a40c-4115-99e2-b40d3b75df8d", 00:21:24.545 "is_configured": true, 00:21:24.545 "data_offset": 0, 00:21:24.545 "data_size": 65536 00:21:24.545 } 00:21:24.545 ] 00:21:24.545 } 00:21:24.545 } 00:21:24.545 }' 00:21:24.545 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:24.545 13:41:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:21:24.545 BaseBdev2 00:21:24.545 BaseBdev3 00:21:24.545 BaseBdev4' 00:21:24.545 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:24.545 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:24.545 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:24.545 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:21:24.545 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:24.545 13:41:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.545 13:41:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.545 13:41:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.545 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:24.545 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:24.545 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:24.545 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:24.545 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:24.545 13:41:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.545 13:41:23 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:24.545 13:41:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.545 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:24.545 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:24.545 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:24.545 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:24.545 13:41:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.545 13:41:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.545 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:24.545 13:41:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.545 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:24.545 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:24.545 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:24.545 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:21:24.545 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:24.545 13:41:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.545 13:41:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.545 13:41:23 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.545 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:24.545 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:24.545 13:41:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:24.545 13:41:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.545 13:41:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.545 [2024-11-20 13:41:24.003679] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:24.545 [2024-11-20 13:41:24.003725] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:24.545 [2024-11-20 13:41:24.003825] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:24.545 [2024-11-20 13:41:24.004181] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:24.545 [2024-11-20 13:41:24.004207] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:21:24.545 13:41:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.545 13:41:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82567 00:21:24.545 13:41:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 82567 ']' 00:21:24.545 13:41:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 82567 00:21:24.545 13:41:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:21:24.545 13:41:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:21:24.545 13:41:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82567 00:21:24.803 13:41:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:24.803 13:41:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:24.803 killing process with pid 82567 00:21:24.803 13:41:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82567' 00:21:24.803 13:41:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 82567 00:21:24.803 13:41:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 82567 00:21:24.803 [2024-11-20 13:41:24.034357] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:25.062 [2024-11-20 13:41:24.447141] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:26.533 13:41:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:21:26.533 00:21:26.533 real 0m11.263s 00:21:26.533 user 0m17.794s 00:21:26.533 sys 0m2.238s 00:21:26.533 13:41:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:26.533 ************************************ 00:21:26.533 END TEST raid5f_state_function_test 00:21:26.533 ************************************ 00:21:26.533 13:41:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.533 13:41:25 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:21:26.533 13:41:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:26.533 13:41:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:26.533 13:41:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:26.533 ************************************ 00:21:26.533 START TEST 
raid5f_state_function_test_sb 00:21:26.533 ************************************ 00:21:26.533 13:41:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:21:26.533 13:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:21:26.533 13:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:21:26.533 13:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:21:26.533 13:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:26.533 13:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:26.533 13:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:26.533 13:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:26.533 13:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:26.533 13:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:26.533 13:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:26.533 13:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:26.533 13:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:26.533 13:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:21:26.533 13:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:26.533 13:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:26.533 13:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:21:26.533 
13:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:26.533 13:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:26.533 13:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:26.533 13:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:26.533 13:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:26.533 13:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:26.533 13:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:26.533 13:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:26.533 13:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:21:26.533 13:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:21:26.533 13:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:21:26.533 13:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:21:26.533 13:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:21:26.533 13:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83240 00:21:26.533 13:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:26.533 Process raid pid: 83240 00:21:26.533 13:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83240' 00:21:26.533 13:41:25 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83240 00:21:26.533 13:41:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83240 ']' 00:21:26.533 13:41:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:26.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:26.533 13:41:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:26.533 13:41:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:26.533 13:41:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:26.533 13:41:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.533 [2024-11-20 13:41:25.765648] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:21:26.534 [2024-11-20 13:41:25.765817] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:26.534 [2024-11-20 13:41:25.937023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.793 [2024-11-20 13:41:26.070722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.051 [2024-11-20 13:41:26.306462] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:27.051 [2024-11-20 13:41:26.306514] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:27.310 13:41:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:27.310 13:41:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:21:27.310 13:41:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:27.310 13:41:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.310 13:41:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.310 [2024-11-20 13:41:26.770492] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:27.310 [2024-11-20 13:41:26.770547] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:27.310 [2024-11-20 13:41:26.770560] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:27.310 [2024-11-20 13:41:26.770575] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:27.310 [2024-11-20 13:41:26.770584] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:21:27.310 [2024-11-20 13:41:26.770597] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:27.310 [2024-11-20 13:41:26.770606] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:27.310 [2024-11-20 13:41:26.770619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:27.310 13:41:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.310 13:41:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:27.310 13:41:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:27.310 13:41:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:27.310 13:41:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:27.310 13:41:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:27.310 13:41:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:27.310 13:41:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:27.310 13:41:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:27.310 13:41:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:27.310 13:41:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:27.310 13:41:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:27.310 13:41:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:21:27.310 13:41:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.310 13:41:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.568 13:41:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.568 13:41:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:27.568 "name": "Existed_Raid", 00:21:27.568 "uuid": "70306d48-a5c2-4e3a-a3c8-e2e143b71d8f", 00:21:27.568 "strip_size_kb": 64, 00:21:27.568 "state": "configuring", 00:21:27.568 "raid_level": "raid5f", 00:21:27.568 "superblock": true, 00:21:27.568 "num_base_bdevs": 4, 00:21:27.568 "num_base_bdevs_discovered": 0, 00:21:27.568 "num_base_bdevs_operational": 4, 00:21:27.568 "base_bdevs_list": [ 00:21:27.568 { 00:21:27.568 "name": "BaseBdev1", 00:21:27.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.568 "is_configured": false, 00:21:27.568 "data_offset": 0, 00:21:27.568 "data_size": 0 00:21:27.568 }, 00:21:27.568 { 00:21:27.568 "name": "BaseBdev2", 00:21:27.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.568 "is_configured": false, 00:21:27.568 "data_offset": 0, 00:21:27.568 "data_size": 0 00:21:27.568 }, 00:21:27.568 { 00:21:27.568 "name": "BaseBdev3", 00:21:27.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.568 "is_configured": false, 00:21:27.568 "data_offset": 0, 00:21:27.568 "data_size": 0 00:21:27.568 }, 00:21:27.568 { 00:21:27.568 "name": "BaseBdev4", 00:21:27.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.568 "is_configured": false, 00:21:27.568 "data_offset": 0, 00:21:27.568 "data_size": 0 00:21:27.568 } 00:21:27.568 ] 00:21:27.568 }' 00:21:27.568 13:41:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:27.568 13:41:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:21:27.827 13:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:27.827 13:41:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.827 13:41:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.827 [2024-11-20 13:41:27.182452] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:27.827 [2024-11-20 13:41:27.182497] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:27.827 13:41:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.827 13:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:27.827 13:41:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.827 13:41:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.827 [2024-11-20 13:41:27.190487] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:27.827 [2024-11-20 13:41:27.190545] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:27.827 [2024-11-20 13:41:27.190561] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:27.827 [2024-11-20 13:41:27.190580] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:27.827 [2024-11-20 13:41:27.190593] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:27.827 [2024-11-20 13:41:27.190612] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:27.827 [2024-11-20 13:41:27.190624] 
bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:27.827 [2024-11-20 13:41:27.190654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:27.827 13:41:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.827 13:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:27.827 13:41:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.827 13:41:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.827 [2024-11-20 13:41:27.239442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:27.827 BaseBdev1 00:21:27.827 13:41:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.827 13:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:27.827 13:41:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:27.827 13:41:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:27.827 13:41:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:27.827 13:41:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:27.827 13:41:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:27.827 13:41:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:27.827 13:41:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.827 13:41:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:21:27.828 13:41:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.828 13:41:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:27.828 13:41:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.828 13:41:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.828 [ 00:21:27.828 { 00:21:27.828 "name": "BaseBdev1", 00:21:27.828 "aliases": [ 00:21:27.828 "32a8ea21-746c-4f83-b158-ee94593a5e17" 00:21:27.828 ], 00:21:27.828 "product_name": "Malloc disk", 00:21:27.828 "block_size": 512, 00:21:27.828 "num_blocks": 65536, 00:21:27.828 "uuid": "32a8ea21-746c-4f83-b158-ee94593a5e17", 00:21:27.828 "assigned_rate_limits": { 00:21:27.828 "rw_ios_per_sec": 0, 00:21:27.828 "rw_mbytes_per_sec": 0, 00:21:27.828 "r_mbytes_per_sec": 0, 00:21:27.828 "w_mbytes_per_sec": 0 00:21:27.828 }, 00:21:27.828 "claimed": true, 00:21:27.828 "claim_type": "exclusive_write", 00:21:27.828 "zoned": false, 00:21:27.828 "supported_io_types": { 00:21:27.828 "read": true, 00:21:27.828 "write": true, 00:21:27.828 "unmap": true, 00:21:27.828 "flush": true, 00:21:27.828 "reset": true, 00:21:27.828 "nvme_admin": false, 00:21:27.828 "nvme_io": false, 00:21:27.828 "nvme_io_md": false, 00:21:27.828 "write_zeroes": true, 00:21:27.828 "zcopy": true, 00:21:27.828 "get_zone_info": false, 00:21:27.828 "zone_management": false, 00:21:27.828 "zone_append": false, 00:21:27.828 "compare": false, 00:21:27.828 "compare_and_write": false, 00:21:27.828 "abort": true, 00:21:27.828 "seek_hole": false, 00:21:27.828 "seek_data": false, 00:21:27.828 "copy": true, 00:21:27.828 "nvme_iov_md": false 00:21:27.828 }, 00:21:27.828 "memory_domains": [ 00:21:27.828 { 00:21:27.828 "dma_device_id": "system", 00:21:27.828 "dma_device_type": 1 00:21:27.828 }, 00:21:27.828 { 00:21:27.828 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:21:27.828 "dma_device_type": 2 00:21:27.828 } 00:21:27.828 ], 00:21:27.828 "driver_specific": {} 00:21:27.828 } 00:21:27.828 ] 00:21:27.828 13:41:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.828 13:41:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:27.828 13:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:27.828 13:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:27.828 13:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:27.828 13:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:27.828 13:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:27.828 13:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:27.828 13:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:27.828 13:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:27.828 13:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:27.828 13:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:27.828 13:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:27.828 13:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:27.828 13:41:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.828 13:41:27 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.828 13:41:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.828 13:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:27.828 "name": "Existed_Raid", 00:21:27.828 "uuid": "f4d22548-38e0-48a7-bb3c-1fb2935fa4c1", 00:21:27.828 "strip_size_kb": 64, 00:21:27.828 "state": "configuring", 00:21:27.828 "raid_level": "raid5f", 00:21:27.828 "superblock": true, 00:21:27.828 "num_base_bdevs": 4, 00:21:27.828 "num_base_bdevs_discovered": 1, 00:21:27.828 "num_base_bdevs_operational": 4, 00:21:27.828 "base_bdevs_list": [ 00:21:27.828 { 00:21:27.828 "name": "BaseBdev1", 00:21:27.828 "uuid": "32a8ea21-746c-4f83-b158-ee94593a5e17", 00:21:27.828 "is_configured": true, 00:21:27.828 "data_offset": 2048, 00:21:27.828 "data_size": 63488 00:21:27.828 }, 00:21:27.828 { 00:21:27.828 "name": "BaseBdev2", 00:21:27.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.828 "is_configured": false, 00:21:27.828 "data_offset": 0, 00:21:27.828 "data_size": 0 00:21:27.828 }, 00:21:27.828 { 00:21:27.828 "name": "BaseBdev3", 00:21:27.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.828 "is_configured": false, 00:21:27.828 "data_offset": 0, 00:21:27.828 "data_size": 0 00:21:27.828 }, 00:21:27.828 { 00:21:27.828 "name": "BaseBdev4", 00:21:27.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.828 "is_configured": false, 00:21:27.828 "data_offset": 0, 00:21:27.828 "data_size": 0 00:21:27.828 } 00:21:27.828 ] 00:21:27.828 }' 00:21:27.828 13:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:27.828 13:41:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.396 13:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:28.396 13:41:27 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.396 13:41:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.396 [2024-11-20 13:41:27.723226] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:28.396 [2024-11-20 13:41:27.723509] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:28.396 13:41:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.396 13:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:28.396 13:41:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.396 13:41:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.396 [2024-11-20 13:41:27.735339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:28.396 [2024-11-20 13:41:27.737671] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:28.396 [2024-11-20 13:41:27.737727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:28.396 [2024-11-20 13:41:27.737740] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:28.396 [2024-11-20 13:41:27.737757] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:28.396 [2024-11-20 13:41:27.737766] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:28.396 [2024-11-20 13:41:27.737779] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:28.396 13:41:27 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.396 13:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:28.396 13:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:28.396 13:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:28.396 13:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:28.396 13:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:28.396 13:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:28.396 13:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:28.396 13:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:28.396 13:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:28.396 13:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:28.396 13:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:28.396 13:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:28.396 13:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:28.396 13:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.396 13:41:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.396 13:41:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.396 13:41:27 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.397 13:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:28.397 "name": "Existed_Raid", 00:21:28.397 "uuid": "9239a495-a257-416f-92e8-2a65fcfe6be4", 00:21:28.397 "strip_size_kb": 64, 00:21:28.397 "state": "configuring", 00:21:28.397 "raid_level": "raid5f", 00:21:28.397 "superblock": true, 00:21:28.397 "num_base_bdevs": 4, 00:21:28.397 "num_base_bdevs_discovered": 1, 00:21:28.397 "num_base_bdevs_operational": 4, 00:21:28.397 "base_bdevs_list": [ 00:21:28.397 { 00:21:28.397 "name": "BaseBdev1", 00:21:28.397 "uuid": "32a8ea21-746c-4f83-b158-ee94593a5e17", 00:21:28.397 "is_configured": true, 00:21:28.397 "data_offset": 2048, 00:21:28.397 "data_size": 63488 00:21:28.397 }, 00:21:28.397 { 00:21:28.397 "name": "BaseBdev2", 00:21:28.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.397 "is_configured": false, 00:21:28.397 "data_offset": 0, 00:21:28.397 "data_size": 0 00:21:28.397 }, 00:21:28.397 { 00:21:28.397 "name": "BaseBdev3", 00:21:28.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.397 "is_configured": false, 00:21:28.397 "data_offset": 0, 00:21:28.397 "data_size": 0 00:21:28.397 }, 00:21:28.397 { 00:21:28.397 "name": "BaseBdev4", 00:21:28.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.397 "is_configured": false, 00:21:28.397 "data_offset": 0, 00:21:28.397 "data_size": 0 00:21:28.397 } 00:21:28.397 ] 00:21:28.397 }' 00:21:28.397 13:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:28.397 13:41:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.965 13:41:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:28.965 13:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:28.965 13:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.965 [2024-11-20 13:41:28.242283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:28.965 BaseBdev2 00:21:28.965 13:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.965 13:41:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:28.965 13:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:28.965 13:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:28.965 13:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:28.965 13:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:28.965 13:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:28.965 13:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:28.965 13:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.965 13:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.965 13:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.965 13:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:28.965 13:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.965 13:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.965 [ 00:21:28.965 { 00:21:28.965 "name": "BaseBdev2", 00:21:28.966 "aliases": [ 00:21:28.966 
"0c176c74-01dc-4a4a-89e7-c093bb06948c" 00:21:28.966 ], 00:21:28.966 "product_name": "Malloc disk", 00:21:28.966 "block_size": 512, 00:21:28.966 "num_blocks": 65536, 00:21:28.966 "uuid": "0c176c74-01dc-4a4a-89e7-c093bb06948c", 00:21:28.966 "assigned_rate_limits": { 00:21:28.966 "rw_ios_per_sec": 0, 00:21:28.966 "rw_mbytes_per_sec": 0, 00:21:28.966 "r_mbytes_per_sec": 0, 00:21:28.966 "w_mbytes_per_sec": 0 00:21:28.966 }, 00:21:28.966 "claimed": true, 00:21:28.966 "claim_type": "exclusive_write", 00:21:28.966 "zoned": false, 00:21:28.966 "supported_io_types": { 00:21:28.966 "read": true, 00:21:28.966 "write": true, 00:21:28.966 "unmap": true, 00:21:28.966 "flush": true, 00:21:28.966 "reset": true, 00:21:28.966 "nvme_admin": false, 00:21:28.966 "nvme_io": false, 00:21:28.966 "nvme_io_md": false, 00:21:28.966 "write_zeroes": true, 00:21:28.966 "zcopy": true, 00:21:28.966 "get_zone_info": false, 00:21:28.966 "zone_management": false, 00:21:28.966 "zone_append": false, 00:21:28.966 "compare": false, 00:21:28.966 "compare_and_write": false, 00:21:28.966 "abort": true, 00:21:28.966 "seek_hole": false, 00:21:28.966 "seek_data": false, 00:21:28.966 "copy": true, 00:21:28.966 "nvme_iov_md": false 00:21:28.966 }, 00:21:28.966 "memory_domains": [ 00:21:28.966 { 00:21:28.966 "dma_device_id": "system", 00:21:28.966 "dma_device_type": 1 00:21:28.966 }, 00:21:28.966 { 00:21:28.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:28.966 "dma_device_type": 2 00:21:28.966 } 00:21:28.966 ], 00:21:28.966 "driver_specific": {} 00:21:28.966 } 00:21:28.966 ] 00:21:28.966 13:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.966 13:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:28.966 13:41:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:28.966 13:41:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:21:28.966 13:41:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:28.966 13:41:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:28.966 13:41:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:28.966 13:41:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:28.966 13:41:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:28.966 13:41:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:28.966 13:41:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:28.966 13:41:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:28.966 13:41:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:28.966 13:41:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:28.966 13:41:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.966 13:41:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:28.966 13:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.966 13:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.966 13:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.966 13:41:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:28.966 "name": "Existed_Raid", 00:21:28.966 "uuid": 
"9239a495-a257-416f-92e8-2a65fcfe6be4", 00:21:28.966 "strip_size_kb": 64, 00:21:28.966 "state": "configuring", 00:21:28.966 "raid_level": "raid5f", 00:21:28.966 "superblock": true, 00:21:28.966 "num_base_bdevs": 4, 00:21:28.966 "num_base_bdevs_discovered": 2, 00:21:28.966 "num_base_bdevs_operational": 4, 00:21:28.966 "base_bdevs_list": [ 00:21:28.966 { 00:21:28.966 "name": "BaseBdev1", 00:21:28.966 "uuid": "32a8ea21-746c-4f83-b158-ee94593a5e17", 00:21:28.966 "is_configured": true, 00:21:28.966 "data_offset": 2048, 00:21:28.966 "data_size": 63488 00:21:28.966 }, 00:21:28.966 { 00:21:28.966 "name": "BaseBdev2", 00:21:28.966 "uuid": "0c176c74-01dc-4a4a-89e7-c093bb06948c", 00:21:28.966 "is_configured": true, 00:21:28.966 "data_offset": 2048, 00:21:28.966 "data_size": 63488 00:21:28.966 }, 00:21:28.966 { 00:21:28.966 "name": "BaseBdev3", 00:21:28.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.966 "is_configured": false, 00:21:28.966 "data_offset": 0, 00:21:28.966 "data_size": 0 00:21:28.966 }, 00:21:28.966 { 00:21:28.966 "name": "BaseBdev4", 00:21:28.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.966 "is_configured": false, 00:21:28.966 "data_offset": 0, 00:21:28.966 "data_size": 0 00:21:28.966 } 00:21:28.966 ] 00:21:28.966 }' 00:21:28.966 13:41:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:28.966 13:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.534 13:41:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:29.534 13:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.534 13:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.534 [2024-11-20 13:41:28.766821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:29.534 BaseBdev3 
00:21:29.534 13:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.534 13:41:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:21:29.534 13:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:21:29.534 13:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:29.534 13:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:29.534 13:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:29.534 13:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:29.534 13:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:29.534 13:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.534 13:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.534 13:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.534 13:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:29.534 13:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.534 13:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.534 [ 00:21:29.534 { 00:21:29.534 "name": "BaseBdev3", 00:21:29.534 "aliases": [ 00:21:29.534 "28982ba2-0b4a-41cf-ab0b-8fa336b41fbf" 00:21:29.534 ], 00:21:29.534 "product_name": "Malloc disk", 00:21:29.534 "block_size": 512, 00:21:29.534 "num_blocks": 65536, 00:21:29.534 "uuid": "28982ba2-0b4a-41cf-ab0b-8fa336b41fbf", 00:21:29.534 
"assigned_rate_limits": { 00:21:29.534 "rw_ios_per_sec": 0, 00:21:29.534 "rw_mbytes_per_sec": 0, 00:21:29.534 "r_mbytes_per_sec": 0, 00:21:29.534 "w_mbytes_per_sec": 0 00:21:29.534 }, 00:21:29.534 "claimed": true, 00:21:29.534 "claim_type": "exclusive_write", 00:21:29.534 "zoned": false, 00:21:29.534 "supported_io_types": { 00:21:29.534 "read": true, 00:21:29.534 "write": true, 00:21:29.534 "unmap": true, 00:21:29.534 "flush": true, 00:21:29.534 "reset": true, 00:21:29.534 "nvme_admin": false, 00:21:29.534 "nvme_io": false, 00:21:29.534 "nvme_io_md": false, 00:21:29.534 "write_zeroes": true, 00:21:29.534 "zcopy": true, 00:21:29.534 "get_zone_info": false, 00:21:29.534 "zone_management": false, 00:21:29.534 "zone_append": false, 00:21:29.534 "compare": false, 00:21:29.534 "compare_and_write": false, 00:21:29.534 "abort": true, 00:21:29.534 "seek_hole": false, 00:21:29.534 "seek_data": false, 00:21:29.534 "copy": true, 00:21:29.534 "nvme_iov_md": false 00:21:29.534 }, 00:21:29.534 "memory_domains": [ 00:21:29.534 { 00:21:29.534 "dma_device_id": "system", 00:21:29.534 "dma_device_type": 1 00:21:29.534 }, 00:21:29.534 { 00:21:29.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:29.534 "dma_device_type": 2 00:21:29.534 } 00:21:29.534 ], 00:21:29.534 "driver_specific": {} 00:21:29.534 } 00:21:29.534 ] 00:21:29.534 13:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.534 13:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:29.534 13:41:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:29.534 13:41:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:29.534 13:41:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:29.534 13:41:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:21:29.534 13:41:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:29.534 13:41:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:29.534 13:41:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:29.534 13:41:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:29.534 13:41:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:29.534 13:41:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:29.535 13:41:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:29.535 13:41:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:29.535 13:41:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.535 13:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.535 13:41:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:29.535 13:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.535 13:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.535 13:41:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:29.535 "name": "Existed_Raid", 00:21:29.535 "uuid": "9239a495-a257-416f-92e8-2a65fcfe6be4", 00:21:29.535 "strip_size_kb": 64, 00:21:29.535 "state": "configuring", 00:21:29.535 "raid_level": "raid5f", 00:21:29.535 "superblock": true, 00:21:29.535 "num_base_bdevs": 4, 00:21:29.535 "num_base_bdevs_discovered": 3, 
00:21:29.535 "num_base_bdevs_operational": 4, 00:21:29.535 "base_bdevs_list": [ 00:21:29.535 { 00:21:29.535 "name": "BaseBdev1", 00:21:29.535 "uuid": "32a8ea21-746c-4f83-b158-ee94593a5e17", 00:21:29.535 "is_configured": true, 00:21:29.535 "data_offset": 2048, 00:21:29.535 "data_size": 63488 00:21:29.535 }, 00:21:29.535 { 00:21:29.535 "name": "BaseBdev2", 00:21:29.535 "uuid": "0c176c74-01dc-4a4a-89e7-c093bb06948c", 00:21:29.535 "is_configured": true, 00:21:29.535 "data_offset": 2048, 00:21:29.535 "data_size": 63488 00:21:29.535 }, 00:21:29.535 { 00:21:29.535 "name": "BaseBdev3", 00:21:29.535 "uuid": "28982ba2-0b4a-41cf-ab0b-8fa336b41fbf", 00:21:29.535 "is_configured": true, 00:21:29.535 "data_offset": 2048, 00:21:29.535 "data_size": 63488 00:21:29.535 }, 00:21:29.535 { 00:21:29.535 "name": "BaseBdev4", 00:21:29.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.535 "is_configured": false, 00:21:29.535 "data_offset": 0, 00:21:29.535 "data_size": 0 00:21:29.535 } 00:21:29.535 ] 00:21:29.535 }' 00:21:29.535 13:41:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:29.535 13:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.794 13:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:21:29.794 13:41:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.794 13:41:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.054 [2024-11-20 13:41:29.279169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:30.054 [2024-11-20 13:41:29.279476] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:30.054 [2024-11-20 13:41:29.279493] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:30.054 [2024-11-20 
13:41:29.279771] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:30.054 BaseBdev4 00:21:30.054 13:41:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.054 13:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:21:30.054 13:41:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:21:30.054 13:41:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:30.054 13:41:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:30.054 13:41:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:30.054 13:41:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:30.054 13:41:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:30.054 13:41:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.054 13:41:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.054 [2024-11-20 13:41:29.286931] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:30.054 [2024-11-20 13:41:29.287258] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:30.054 [2024-11-20 13:41:29.287591] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:30.054 13:41:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.054 13:41:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:30.054 13:41:29 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.054 13:41:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.054 [ 00:21:30.054 { 00:21:30.054 "name": "BaseBdev4", 00:21:30.054 "aliases": [ 00:21:30.054 "3903fe4f-3a91-45e8-8867-73acb1def215" 00:21:30.054 ], 00:21:30.054 "product_name": "Malloc disk", 00:21:30.054 "block_size": 512, 00:21:30.054 "num_blocks": 65536, 00:21:30.054 "uuid": "3903fe4f-3a91-45e8-8867-73acb1def215", 00:21:30.054 "assigned_rate_limits": { 00:21:30.054 "rw_ios_per_sec": 0, 00:21:30.054 "rw_mbytes_per_sec": 0, 00:21:30.054 "r_mbytes_per_sec": 0, 00:21:30.054 "w_mbytes_per_sec": 0 00:21:30.054 }, 00:21:30.054 "claimed": true, 00:21:30.054 "claim_type": "exclusive_write", 00:21:30.054 "zoned": false, 00:21:30.054 "supported_io_types": { 00:21:30.054 "read": true, 00:21:30.054 "write": true, 00:21:30.054 "unmap": true, 00:21:30.054 "flush": true, 00:21:30.054 "reset": true, 00:21:30.054 "nvme_admin": false, 00:21:30.054 "nvme_io": false, 00:21:30.054 "nvme_io_md": false, 00:21:30.054 "write_zeroes": true, 00:21:30.054 "zcopy": true, 00:21:30.054 "get_zone_info": false, 00:21:30.054 "zone_management": false, 00:21:30.054 "zone_append": false, 00:21:30.054 "compare": false, 00:21:30.054 "compare_and_write": false, 00:21:30.054 "abort": true, 00:21:30.054 "seek_hole": false, 00:21:30.054 "seek_data": false, 00:21:30.054 "copy": true, 00:21:30.054 "nvme_iov_md": false 00:21:30.054 }, 00:21:30.054 "memory_domains": [ 00:21:30.054 { 00:21:30.054 "dma_device_id": "system", 00:21:30.054 "dma_device_type": 1 00:21:30.054 }, 00:21:30.054 { 00:21:30.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:30.054 "dma_device_type": 2 00:21:30.054 } 00:21:30.054 ], 00:21:30.054 "driver_specific": {} 00:21:30.054 } 00:21:30.054 ] 00:21:30.054 13:41:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.054 13:41:29 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:30.054 13:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:30.054 13:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:30.054 13:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:21:30.054 13:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:30.054 13:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:30.054 13:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:30.054 13:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:30.054 13:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:30.054 13:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:30.054 13:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:30.054 13:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:30.054 13:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:30.054 13:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.054 13:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:30.054 13:41:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.054 13:41:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:21:30.054 13:41:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.054 13:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:30.054 "name": "Existed_Raid", 00:21:30.054 "uuid": "9239a495-a257-416f-92e8-2a65fcfe6be4", 00:21:30.054 "strip_size_kb": 64, 00:21:30.054 "state": "online", 00:21:30.054 "raid_level": "raid5f", 00:21:30.054 "superblock": true, 00:21:30.054 "num_base_bdevs": 4, 00:21:30.054 "num_base_bdevs_discovered": 4, 00:21:30.054 "num_base_bdevs_operational": 4, 00:21:30.054 "base_bdevs_list": [ 00:21:30.054 { 00:21:30.054 "name": "BaseBdev1", 00:21:30.054 "uuid": "32a8ea21-746c-4f83-b158-ee94593a5e17", 00:21:30.054 "is_configured": true, 00:21:30.054 "data_offset": 2048, 00:21:30.055 "data_size": 63488 00:21:30.055 }, 00:21:30.055 { 00:21:30.055 "name": "BaseBdev2", 00:21:30.055 "uuid": "0c176c74-01dc-4a4a-89e7-c093bb06948c", 00:21:30.055 "is_configured": true, 00:21:30.055 "data_offset": 2048, 00:21:30.055 "data_size": 63488 00:21:30.055 }, 00:21:30.055 { 00:21:30.055 "name": "BaseBdev3", 00:21:30.055 "uuid": "28982ba2-0b4a-41cf-ab0b-8fa336b41fbf", 00:21:30.055 "is_configured": true, 00:21:30.055 "data_offset": 2048, 00:21:30.055 "data_size": 63488 00:21:30.055 }, 00:21:30.055 { 00:21:30.055 "name": "BaseBdev4", 00:21:30.055 "uuid": "3903fe4f-3a91-45e8-8867-73acb1def215", 00:21:30.055 "is_configured": true, 00:21:30.055 "data_offset": 2048, 00:21:30.055 "data_size": 63488 00:21:30.055 } 00:21:30.055 ] 00:21:30.055 }' 00:21:30.055 13:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:30.055 13:41:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.356 13:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:30.356 13:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:21:30.356 13:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:30.356 13:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:30.356 13:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:21:30.356 13:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:30.356 13:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:30.356 13:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:30.356 13:41:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.356 13:41:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.356 [2024-11-20 13:41:29.808298] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:30.614 13:41:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.614 13:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:30.614 "name": "Existed_Raid", 00:21:30.614 "aliases": [ 00:21:30.614 "9239a495-a257-416f-92e8-2a65fcfe6be4" 00:21:30.614 ], 00:21:30.614 "product_name": "Raid Volume", 00:21:30.614 "block_size": 512, 00:21:30.614 "num_blocks": 190464, 00:21:30.614 "uuid": "9239a495-a257-416f-92e8-2a65fcfe6be4", 00:21:30.614 "assigned_rate_limits": { 00:21:30.614 "rw_ios_per_sec": 0, 00:21:30.614 "rw_mbytes_per_sec": 0, 00:21:30.614 "r_mbytes_per_sec": 0, 00:21:30.614 "w_mbytes_per_sec": 0 00:21:30.614 }, 00:21:30.615 "claimed": false, 00:21:30.615 "zoned": false, 00:21:30.615 "supported_io_types": { 00:21:30.615 "read": true, 00:21:30.615 "write": true, 00:21:30.615 "unmap": false, 00:21:30.615 "flush": false, 
00:21:30.615 "reset": true, 00:21:30.615 "nvme_admin": false, 00:21:30.615 "nvme_io": false, 00:21:30.615 "nvme_io_md": false, 00:21:30.615 "write_zeroes": true, 00:21:30.615 "zcopy": false, 00:21:30.615 "get_zone_info": false, 00:21:30.615 "zone_management": false, 00:21:30.615 "zone_append": false, 00:21:30.615 "compare": false, 00:21:30.615 "compare_and_write": false, 00:21:30.615 "abort": false, 00:21:30.615 "seek_hole": false, 00:21:30.615 "seek_data": false, 00:21:30.615 "copy": false, 00:21:30.615 "nvme_iov_md": false 00:21:30.615 }, 00:21:30.615 "driver_specific": { 00:21:30.615 "raid": { 00:21:30.615 "uuid": "9239a495-a257-416f-92e8-2a65fcfe6be4", 00:21:30.615 "strip_size_kb": 64, 00:21:30.615 "state": "online", 00:21:30.615 "raid_level": "raid5f", 00:21:30.615 "superblock": true, 00:21:30.615 "num_base_bdevs": 4, 00:21:30.615 "num_base_bdevs_discovered": 4, 00:21:30.615 "num_base_bdevs_operational": 4, 00:21:30.615 "base_bdevs_list": [ 00:21:30.615 { 00:21:30.615 "name": "BaseBdev1", 00:21:30.615 "uuid": "32a8ea21-746c-4f83-b158-ee94593a5e17", 00:21:30.615 "is_configured": true, 00:21:30.615 "data_offset": 2048, 00:21:30.615 "data_size": 63488 00:21:30.615 }, 00:21:30.615 { 00:21:30.615 "name": "BaseBdev2", 00:21:30.615 "uuid": "0c176c74-01dc-4a4a-89e7-c093bb06948c", 00:21:30.615 "is_configured": true, 00:21:30.615 "data_offset": 2048, 00:21:30.615 "data_size": 63488 00:21:30.615 }, 00:21:30.615 { 00:21:30.615 "name": "BaseBdev3", 00:21:30.615 "uuid": "28982ba2-0b4a-41cf-ab0b-8fa336b41fbf", 00:21:30.615 "is_configured": true, 00:21:30.615 "data_offset": 2048, 00:21:30.615 "data_size": 63488 00:21:30.615 }, 00:21:30.615 { 00:21:30.615 "name": "BaseBdev4", 00:21:30.615 "uuid": "3903fe4f-3a91-45e8-8867-73acb1def215", 00:21:30.615 "is_configured": true, 00:21:30.615 "data_offset": 2048, 00:21:30.615 "data_size": 63488 00:21:30.615 } 00:21:30.615 ] 00:21:30.615 } 00:21:30.615 } 00:21:30.615 }' 00:21:30.615 13:41:29 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:30.615 13:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:30.615 BaseBdev2 00:21:30.615 BaseBdev3 00:21:30.615 BaseBdev4' 00:21:30.615 13:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:30.615 13:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:30.615 13:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:30.615 13:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:30.615 13:41:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.615 13:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:30.615 13:41:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.615 13:41:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.615 13:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:30.615 13:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:30.615 13:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:30.615 13:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:30.615 13:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:30.615 13:41:29 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.615 13:41:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.615 13:41:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.615 13:41:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:30.615 13:41:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:30.615 13:41:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:30.615 13:41:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:30.615 13:41:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.615 13:41:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.615 13:41:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:30.615 13:41:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.875 13:41:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:30.875 13:41:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:30.875 13:41:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:30.875 13:41:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:21:30.875 13:41:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:30.875 13:41:30 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.875 13:41:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.875 13:41:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.875 13:41:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:30.875 13:41:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:30.875 13:41:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:30.875 13:41:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.875 13:41:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.875 [2024-11-20 13:41:30.188042] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:30.875 13:41:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.875 13:41:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:30.875 13:41:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:21:30.875 13:41:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:30.875 13:41:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:21:30.875 13:41:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:30.875 13:41:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:21:30.875 13:41:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:30.875 13:41:30 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:30.875 13:41:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:30.875 13:41:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:30.875 13:41:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:30.875 13:41:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:30.875 13:41:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:30.875 13:41:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:30.875 13:41:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:30.875 13:41:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.875 13:41:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:30.875 13:41:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.875 13:41:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.875 13:41:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.875 13:41:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:30.875 "name": "Existed_Raid", 00:21:30.875 "uuid": "9239a495-a257-416f-92e8-2a65fcfe6be4", 00:21:30.875 "strip_size_kb": 64, 00:21:30.875 "state": "online", 00:21:30.875 "raid_level": "raid5f", 00:21:30.875 "superblock": true, 00:21:30.875 "num_base_bdevs": 4, 00:21:30.875 "num_base_bdevs_discovered": 3, 00:21:30.875 "num_base_bdevs_operational": 3, 00:21:30.875 "base_bdevs_list": [ 00:21:30.875 { 00:21:30.875 "name": 
null, 00:21:30.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.875 "is_configured": false, 00:21:30.875 "data_offset": 0, 00:21:30.875 "data_size": 63488 00:21:30.875 }, 00:21:30.875 { 00:21:30.875 "name": "BaseBdev2", 00:21:30.875 "uuid": "0c176c74-01dc-4a4a-89e7-c093bb06948c", 00:21:30.875 "is_configured": true, 00:21:30.875 "data_offset": 2048, 00:21:30.876 "data_size": 63488 00:21:30.876 }, 00:21:30.876 { 00:21:30.876 "name": "BaseBdev3", 00:21:30.876 "uuid": "28982ba2-0b4a-41cf-ab0b-8fa336b41fbf", 00:21:30.876 "is_configured": true, 00:21:30.876 "data_offset": 2048, 00:21:30.876 "data_size": 63488 00:21:30.876 }, 00:21:30.876 { 00:21:30.876 "name": "BaseBdev4", 00:21:30.876 "uuid": "3903fe4f-3a91-45e8-8867-73acb1def215", 00:21:30.876 "is_configured": true, 00:21:30.876 "data_offset": 2048, 00:21:30.876 "data_size": 63488 00:21:30.876 } 00:21:30.876 ] 00:21:30.876 }' 00:21:30.876 13:41:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:30.876 13:41:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.444 13:41:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:31.444 13:41:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:31.444 13:41:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:31.444 13:41:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.444 13:41:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.444 13:41:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.444 13:41:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.444 13:41:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:21:31.444 13:41:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:31.444 13:41:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:31.444 13:41:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.444 13:41:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.444 [2024-11-20 13:41:30.899234] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:31.444 [2024-11-20 13:41:30.899668] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:31.703 [2024-11-20 13:41:31.025002] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:31.703 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.703 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:31.703 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:31.703 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:31.703 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.703 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.703 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.703 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.703 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:31.703 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:21:31.703 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:21:31.703 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.703 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.703 [2024-11-20 13:41:31.092595] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:31.961 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.961 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:31.961 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:31.961 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.961 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:31.961 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.961 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.961 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.961 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:31.961 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:31.961 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:21:31.961 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.961 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.961 [2024-11-20 
13:41:31.257921] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:31.961 [2024-11-20 13:41:31.257989] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:31.961 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.961 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:31.961 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:31.961 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.961 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:31.961 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.961 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.961 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.961 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:31.961 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:31.961 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:21:31.961 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:21:31.961 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:31.961 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:31.961 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.961 13:41:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.220 BaseBdev2 00:21:32.220 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.220 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:21:32.220 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:32.220 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:32.220 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:32.220 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:32.220 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:32.220 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:32.220 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.220 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.220 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.220 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:32.220 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.220 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.220 [ 00:21:32.220 { 00:21:32.220 "name": "BaseBdev2", 00:21:32.220 "aliases": [ 00:21:32.220 "279f81fb-37c9-45a0-bdaa-58edb2a6928b" 00:21:32.220 ], 00:21:32.220 "product_name": "Malloc disk", 00:21:32.220 "block_size": 512, 00:21:32.220 
"num_blocks": 65536, 00:21:32.220 "uuid": "279f81fb-37c9-45a0-bdaa-58edb2a6928b", 00:21:32.221 "assigned_rate_limits": { 00:21:32.221 "rw_ios_per_sec": 0, 00:21:32.221 "rw_mbytes_per_sec": 0, 00:21:32.221 "r_mbytes_per_sec": 0, 00:21:32.221 "w_mbytes_per_sec": 0 00:21:32.221 }, 00:21:32.221 "claimed": false, 00:21:32.221 "zoned": false, 00:21:32.221 "supported_io_types": { 00:21:32.221 "read": true, 00:21:32.221 "write": true, 00:21:32.221 "unmap": true, 00:21:32.221 "flush": true, 00:21:32.221 "reset": true, 00:21:32.221 "nvme_admin": false, 00:21:32.221 "nvme_io": false, 00:21:32.221 "nvme_io_md": false, 00:21:32.221 "write_zeroes": true, 00:21:32.221 "zcopy": true, 00:21:32.221 "get_zone_info": false, 00:21:32.221 "zone_management": false, 00:21:32.221 "zone_append": false, 00:21:32.221 "compare": false, 00:21:32.221 "compare_and_write": false, 00:21:32.221 "abort": true, 00:21:32.221 "seek_hole": false, 00:21:32.221 "seek_data": false, 00:21:32.221 "copy": true, 00:21:32.221 "nvme_iov_md": false 00:21:32.221 }, 00:21:32.221 "memory_domains": [ 00:21:32.221 { 00:21:32.221 "dma_device_id": "system", 00:21:32.221 "dma_device_type": 1 00:21:32.221 }, 00:21:32.221 { 00:21:32.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:32.221 "dma_device_type": 2 00:21:32.221 } 00:21:32.221 ], 00:21:32.221 "driver_specific": {} 00:21:32.221 } 00:21:32.221 ] 00:21:32.221 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.221 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:32.221 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:32.221 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:32.221 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:32.221 13:41:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.221 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.221 BaseBdev3 00:21:32.221 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.221 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:21:32.221 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:21:32.221 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:32.221 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:32.221 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:32.221 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:32.221 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:32.221 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.221 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.221 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.221 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:32.221 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.221 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.221 [ 00:21:32.221 { 00:21:32.221 "name": "BaseBdev3", 00:21:32.221 "aliases": [ 00:21:32.221 
"af188a15-760d-4911-b896-87fd5caa4586" 00:21:32.221 ], 00:21:32.221 "product_name": "Malloc disk", 00:21:32.221 "block_size": 512, 00:21:32.221 "num_blocks": 65536, 00:21:32.221 "uuid": "af188a15-760d-4911-b896-87fd5caa4586", 00:21:32.221 "assigned_rate_limits": { 00:21:32.221 "rw_ios_per_sec": 0, 00:21:32.221 "rw_mbytes_per_sec": 0, 00:21:32.221 "r_mbytes_per_sec": 0, 00:21:32.221 "w_mbytes_per_sec": 0 00:21:32.221 }, 00:21:32.221 "claimed": false, 00:21:32.221 "zoned": false, 00:21:32.221 "supported_io_types": { 00:21:32.221 "read": true, 00:21:32.221 "write": true, 00:21:32.221 "unmap": true, 00:21:32.221 "flush": true, 00:21:32.221 "reset": true, 00:21:32.221 "nvme_admin": false, 00:21:32.221 "nvme_io": false, 00:21:32.221 "nvme_io_md": false, 00:21:32.221 "write_zeroes": true, 00:21:32.221 "zcopy": true, 00:21:32.221 "get_zone_info": false, 00:21:32.221 "zone_management": false, 00:21:32.221 "zone_append": false, 00:21:32.221 "compare": false, 00:21:32.221 "compare_and_write": false, 00:21:32.221 "abort": true, 00:21:32.221 "seek_hole": false, 00:21:32.221 "seek_data": false, 00:21:32.221 "copy": true, 00:21:32.221 "nvme_iov_md": false 00:21:32.221 }, 00:21:32.221 "memory_domains": [ 00:21:32.221 { 00:21:32.221 "dma_device_id": "system", 00:21:32.221 "dma_device_type": 1 00:21:32.221 }, 00:21:32.221 { 00:21:32.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:32.221 "dma_device_type": 2 00:21:32.221 } 00:21:32.221 ], 00:21:32.221 "driver_specific": {} 00:21:32.221 } 00:21:32.221 ] 00:21:32.221 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.221 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:32.221 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:32.221 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:32.221 13:41:31 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:21:32.221 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.221 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.221 BaseBdev4 00:21:32.221 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.221 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:21:32.221 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:21:32.221 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:32.221 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:32.221 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:32.221 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:32.221 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:32.221 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.221 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.221 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.221 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:32.221 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.221 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:21:32.221 [ 00:21:32.221 { 00:21:32.221 "name": "BaseBdev4", 00:21:32.221 "aliases": [ 00:21:32.221 "8f71a23b-5db0-41c4-85a2-b180cc5c936a" 00:21:32.221 ], 00:21:32.221 "product_name": "Malloc disk", 00:21:32.222 "block_size": 512, 00:21:32.222 "num_blocks": 65536, 00:21:32.222 "uuid": "8f71a23b-5db0-41c4-85a2-b180cc5c936a", 00:21:32.222 "assigned_rate_limits": { 00:21:32.222 "rw_ios_per_sec": 0, 00:21:32.222 "rw_mbytes_per_sec": 0, 00:21:32.222 "r_mbytes_per_sec": 0, 00:21:32.222 "w_mbytes_per_sec": 0 00:21:32.222 }, 00:21:32.222 "claimed": false, 00:21:32.222 "zoned": false, 00:21:32.222 "supported_io_types": { 00:21:32.222 "read": true, 00:21:32.222 "write": true, 00:21:32.222 "unmap": true, 00:21:32.222 "flush": true, 00:21:32.222 "reset": true, 00:21:32.222 "nvme_admin": false, 00:21:32.222 "nvme_io": false, 00:21:32.222 "nvme_io_md": false, 00:21:32.222 "write_zeroes": true, 00:21:32.222 "zcopy": true, 00:21:32.222 "get_zone_info": false, 00:21:32.222 "zone_management": false, 00:21:32.222 "zone_append": false, 00:21:32.222 "compare": false, 00:21:32.222 "compare_and_write": false, 00:21:32.222 "abort": true, 00:21:32.222 "seek_hole": false, 00:21:32.222 "seek_data": false, 00:21:32.222 "copy": true, 00:21:32.222 "nvme_iov_md": false 00:21:32.222 }, 00:21:32.222 "memory_domains": [ 00:21:32.222 { 00:21:32.222 "dma_device_id": "system", 00:21:32.222 "dma_device_type": 1 00:21:32.480 }, 00:21:32.480 { 00:21:32.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:32.480 "dma_device_type": 2 00:21:32.480 } 00:21:32.480 ], 00:21:32.480 "driver_specific": {} 00:21:32.480 } 00:21:32.480 ] 00:21:32.480 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.480 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:32.480 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:32.480 13:41:31 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:32.480 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:32.480 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.480 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.480 [2024-11-20 13:41:31.713308] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:32.480 [2024-11-20 13:41:31.713617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:32.480 [2024-11-20 13:41:31.713756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:32.480 [2024-11-20 13:41:31.716121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:32.480 [2024-11-20 13:41:31.716328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:32.480 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.480 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:32.480 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:32.480 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:32.480 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:32.480 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:32.480 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:21:32.480 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:32.480 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:32.480 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:32.480 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:32.480 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.480 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:32.480 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.480 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.480 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.480 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:32.480 "name": "Existed_Raid", 00:21:32.481 "uuid": "107f9780-0ff9-4c5b-825f-2e590390f014", 00:21:32.481 "strip_size_kb": 64, 00:21:32.481 "state": "configuring", 00:21:32.481 "raid_level": "raid5f", 00:21:32.481 "superblock": true, 00:21:32.481 "num_base_bdevs": 4, 00:21:32.481 "num_base_bdevs_discovered": 3, 00:21:32.481 "num_base_bdevs_operational": 4, 00:21:32.481 "base_bdevs_list": [ 00:21:32.481 { 00:21:32.481 "name": "BaseBdev1", 00:21:32.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.481 "is_configured": false, 00:21:32.481 "data_offset": 0, 00:21:32.481 "data_size": 0 00:21:32.481 }, 00:21:32.481 { 00:21:32.481 "name": "BaseBdev2", 00:21:32.481 "uuid": "279f81fb-37c9-45a0-bdaa-58edb2a6928b", 00:21:32.481 "is_configured": true, 00:21:32.481 "data_offset": 2048, 00:21:32.481 
"data_size": 63488 00:21:32.481 }, 00:21:32.481 { 00:21:32.481 "name": "BaseBdev3", 00:21:32.481 "uuid": "af188a15-760d-4911-b896-87fd5caa4586", 00:21:32.481 "is_configured": true, 00:21:32.481 "data_offset": 2048, 00:21:32.481 "data_size": 63488 00:21:32.481 }, 00:21:32.481 { 00:21:32.481 "name": "BaseBdev4", 00:21:32.481 "uuid": "8f71a23b-5db0-41c4-85a2-b180cc5c936a", 00:21:32.481 "is_configured": true, 00:21:32.481 "data_offset": 2048, 00:21:32.481 "data_size": 63488 00:21:32.481 } 00:21:32.481 ] 00:21:32.481 }' 00:21:32.481 13:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:32.481 13:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.740 13:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:32.740 13:41:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.740 13:41:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.740 [2024-11-20 13:41:32.160626] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:32.740 13:41:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.740 13:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:32.740 13:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:32.740 13:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:32.740 13:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:32.740 13:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:32.740 13:41:32 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:32.740 13:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:32.740 13:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:32.740 13:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:32.740 13:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:32.740 13:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.740 13:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:32.740 13:41:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.740 13:41:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.740 13:41:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.740 13:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:32.740 "name": "Existed_Raid", 00:21:32.740 "uuid": "107f9780-0ff9-4c5b-825f-2e590390f014", 00:21:32.740 "strip_size_kb": 64, 00:21:32.740 "state": "configuring", 00:21:32.740 "raid_level": "raid5f", 00:21:32.740 "superblock": true, 00:21:32.740 "num_base_bdevs": 4, 00:21:32.740 "num_base_bdevs_discovered": 2, 00:21:32.740 "num_base_bdevs_operational": 4, 00:21:32.740 "base_bdevs_list": [ 00:21:32.740 { 00:21:32.740 "name": "BaseBdev1", 00:21:32.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.740 "is_configured": false, 00:21:32.740 "data_offset": 0, 00:21:32.740 "data_size": 0 00:21:32.740 }, 00:21:32.740 { 00:21:32.740 "name": null, 00:21:32.740 "uuid": "279f81fb-37c9-45a0-bdaa-58edb2a6928b", 00:21:32.740 
"is_configured": false, 00:21:32.740 "data_offset": 0, 00:21:32.740 "data_size": 63488 00:21:32.740 }, 00:21:32.740 { 00:21:32.740 "name": "BaseBdev3", 00:21:32.740 "uuid": "af188a15-760d-4911-b896-87fd5caa4586", 00:21:32.740 "is_configured": true, 00:21:32.740 "data_offset": 2048, 00:21:32.740 "data_size": 63488 00:21:32.740 }, 00:21:32.740 { 00:21:32.740 "name": "BaseBdev4", 00:21:32.740 "uuid": "8f71a23b-5db0-41c4-85a2-b180cc5c936a", 00:21:32.740 "is_configured": true, 00:21:32.740 "data_offset": 2048, 00:21:32.740 "data_size": 63488 00:21:32.740 } 00:21:32.740 ] 00:21:32.740 }' 00:21:32.740 13:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:32.740 13:41:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.311 13:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:33.311 13:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.311 13:41:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.311 13:41:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.311 13:41:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.311 13:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:21:33.311 13:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:33.311 13:41:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.311 13:41:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.311 [2024-11-20 13:41:32.736151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:21:33.311 BaseBdev1 00:21:33.311 13:41:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.311 13:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:21:33.311 13:41:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:33.311 13:41:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:33.311 13:41:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:33.311 13:41:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:33.311 13:41:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:33.311 13:41:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:33.311 13:41:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.311 13:41:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.311 13:41:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.311 13:41:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:33.311 13:41:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.311 13:41:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.311 [ 00:21:33.311 { 00:21:33.311 "name": "BaseBdev1", 00:21:33.311 "aliases": [ 00:21:33.311 "84ed169c-c725-4de2-ae2b-f62541e7487b" 00:21:33.311 ], 00:21:33.311 "product_name": "Malloc disk", 00:21:33.311 "block_size": 512, 00:21:33.311 "num_blocks": 65536, 00:21:33.311 "uuid": "84ed169c-c725-4de2-ae2b-f62541e7487b", 
00:21:33.311 "assigned_rate_limits": { 00:21:33.311 "rw_ios_per_sec": 0, 00:21:33.311 "rw_mbytes_per_sec": 0, 00:21:33.311 "r_mbytes_per_sec": 0, 00:21:33.311 "w_mbytes_per_sec": 0 00:21:33.311 }, 00:21:33.311 "claimed": true, 00:21:33.311 "claim_type": "exclusive_write", 00:21:33.311 "zoned": false, 00:21:33.311 "supported_io_types": { 00:21:33.311 "read": true, 00:21:33.311 "write": true, 00:21:33.311 "unmap": true, 00:21:33.311 "flush": true, 00:21:33.311 "reset": true, 00:21:33.311 "nvme_admin": false, 00:21:33.311 "nvme_io": false, 00:21:33.311 "nvme_io_md": false, 00:21:33.311 "write_zeroes": true, 00:21:33.311 "zcopy": true, 00:21:33.311 "get_zone_info": false, 00:21:33.311 "zone_management": false, 00:21:33.311 "zone_append": false, 00:21:33.311 "compare": false, 00:21:33.311 "compare_and_write": false, 00:21:33.311 "abort": true, 00:21:33.311 "seek_hole": false, 00:21:33.311 "seek_data": false, 00:21:33.311 "copy": true, 00:21:33.311 "nvme_iov_md": false 00:21:33.311 }, 00:21:33.311 "memory_domains": [ 00:21:33.311 { 00:21:33.311 "dma_device_id": "system", 00:21:33.312 "dma_device_type": 1 00:21:33.312 }, 00:21:33.312 { 00:21:33.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:33.312 "dma_device_type": 2 00:21:33.312 } 00:21:33.312 ], 00:21:33.312 "driver_specific": {} 00:21:33.312 } 00:21:33.312 ] 00:21:33.312 13:41:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.312 13:41:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:33.312 13:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:33.312 13:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:33.312 13:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:33.312 13:41:32 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:33.312 13:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:33.312 13:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:33.312 13:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:33.312 13:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:33.312 13:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:33.312 13:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:33.571 13:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.571 13:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:33.571 13:41:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.571 13:41:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.571 13:41:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.571 13:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:33.571 "name": "Existed_Raid", 00:21:33.571 "uuid": "107f9780-0ff9-4c5b-825f-2e590390f014", 00:21:33.571 "strip_size_kb": 64, 00:21:33.571 "state": "configuring", 00:21:33.571 "raid_level": "raid5f", 00:21:33.571 "superblock": true, 00:21:33.571 "num_base_bdevs": 4, 00:21:33.571 "num_base_bdevs_discovered": 3, 00:21:33.571 "num_base_bdevs_operational": 4, 00:21:33.571 "base_bdevs_list": [ 00:21:33.571 { 00:21:33.571 "name": "BaseBdev1", 00:21:33.571 "uuid": "84ed169c-c725-4de2-ae2b-f62541e7487b", 
00:21:33.571 "is_configured": true, 00:21:33.571 "data_offset": 2048, 00:21:33.571 "data_size": 63488 00:21:33.571 }, 00:21:33.571 { 00:21:33.571 "name": null, 00:21:33.571 "uuid": "279f81fb-37c9-45a0-bdaa-58edb2a6928b", 00:21:33.571 "is_configured": false, 00:21:33.571 "data_offset": 0, 00:21:33.571 "data_size": 63488 00:21:33.571 }, 00:21:33.571 { 00:21:33.571 "name": "BaseBdev3", 00:21:33.571 "uuid": "af188a15-760d-4911-b896-87fd5caa4586", 00:21:33.571 "is_configured": true, 00:21:33.571 "data_offset": 2048, 00:21:33.571 "data_size": 63488 00:21:33.571 }, 00:21:33.571 { 00:21:33.571 "name": "BaseBdev4", 00:21:33.571 "uuid": "8f71a23b-5db0-41c4-85a2-b180cc5c936a", 00:21:33.571 "is_configured": true, 00:21:33.571 "data_offset": 2048, 00:21:33.571 "data_size": 63488 00:21:33.571 } 00:21:33.571 ] 00:21:33.571 }' 00:21:33.571 13:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:33.571 13:41:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.829 13:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.829 13:41:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.829 13:41:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.829 13:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:33.829 13:41:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.829 13:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:21:33.829 13:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:21:33.829 13:41:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:33.829 13:41:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.829 [2024-11-20 13:41:33.291534] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:33.829 13:41:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.829 13:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:33.829 13:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:33.829 13:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:33.829 13:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:33.829 13:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:33.829 13:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:33.829 13:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:33.829 13:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:33.829 13:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:33.829 13:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:33.829 13:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.830 13:41:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.830 13:41:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.830 13:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:21:34.087 13:41:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.087 13:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:34.087 "name": "Existed_Raid", 00:21:34.087 "uuid": "107f9780-0ff9-4c5b-825f-2e590390f014", 00:21:34.087 "strip_size_kb": 64, 00:21:34.087 "state": "configuring", 00:21:34.087 "raid_level": "raid5f", 00:21:34.087 "superblock": true, 00:21:34.088 "num_base_bdevs": 4, 00:21:34.088 "num_base_bdevs_discovered": 2, 00:21:34.088 "num_base_bdevs_operational": 4, 00:21:34.088 "base_bdevs_list": [ 00:21:34.088 { 00:21:34.088 "name": "BaseBdev1", 00:21:34.088 "uuid": "84ed169c-c725-4de2-ae2b-f62541e7487b", 00:21:34.088 "is_configured": true, 00:21:34.088 "data_offset": 2048, 00:21:34.088 "data_size": 63488 00:21:34.088 }, 00:21:34.088 { 00:21:34.088 "name": null, 00:21:34.088 "uuid": "279f81fb-37c9-45a0-bdaa-58edb2a6928b", 00:21:34.088 "is_configured": false, 00:21:34.088 "data_offset": 0, 00:21:34.088 "data_size": 63488 00:21:34.088 }, 00:21:34.088 { 00:21:34.088 "name": null, 00:21:34.088 "uuid": "af188a15-760d-4911-b896-87fd5caa4586", 00:21:34.088 "is_configured": false, 00:21:34.088 "data_offset": 0, 00:21:34.088 "data_size": 63488 00:21:34.088 }, 00:21:34.088 { 00:21:34.088 "name": "BaseBdev4", 00:21:34.088 "uuid": "8f71a23b-5db0-41c4-85a2-b180cc5c936a", 00:21:34.088 "is_configured": true, 00:21:34.088 "data_offset": 2048, 00:21:34.088 "data_size": 63488 00:21:34.088 } 00:21:34.088 ] 00:21:34.088 }' 00:21:34.088 13:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:34.088 13:41:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:34.346 13:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.346 13:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:21:34.346 13:41:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.346 13:41:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:34.346 13:41:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.606 13:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:21:34.606 13:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:34.606 13:41:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.606 13:41:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:34.606 [2024-11-20 13:41:33.838771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:34.606 13:41:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.606 13:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:34.606 13:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:34.606 13:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:34.606 13:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:34.606 13:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:34.606 13:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:34.606 13:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:34.606 13:41:33 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:34.606 13:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:34.606 13:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:34.606 13:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.606 13:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:34.606 13:41:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.607 13:41:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:34.607 13:41:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.607 13:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:34.607 "name": "Existed_Raid", 00:21:34.607 "uuid": "107f9780-0ff9-4c5b-825f-2e590390f014", 00:21:34.607 "strip_size_kb": 64, 00:21:34.607 "state": "configuring", 00:21:34.607 "raid_level": "raid5f", 00:21:34.607 "superblock": true, 00:21:34.607 "num_base_bdevs": 4, 00:21:34.607 "num_base_bdevs_discovered": 3, 00:21:34.607 "num_base_bdevs_operational": 4, 00:21:34.607 "base_bdevs_list": [ 00:21:34.607 { 00:21:34.607 "name": "BaseBdev1", 00:21:34.607 "uuid": "84ed169c-c725-4de2-ae2b-f62541e7487b", 00:21:34.607 "is_configured": true, 00:21:34.607 "data_offset": 2048, 00:21:34.607 "data_size": 63488 00:21:34.607 }, 00:21:34.607 { 00:21:34.607 "name": null, 00:21:34.607 "uuid": "279f81fb-37c9-45a0-bdaa-58edb2a6928b", 00:21:34.607 "is_configured": false, 00:21:34.607 "data_offset": 0, 00:21:34.607 "data_size": 63488 00:21:34.607 }, 00:21:34.607 { 00:21:34.607 "name": "BaseBdev3", 00:21:34.607 "uuid": "af188a15-760d-4911-b896-87fd5caa4586", 00:21:34.607 
"is_configured": true, 00:21:34.607 "data_offset": 2048, 00:21:34.607 "data_size": 63488 00:21:34.607 }, 00:21:34.607 { 00:21:34.607 "name": "BaseBdev4", 00:21:34.607 "uuid": "8f71a23b-5db0-41c4-85a2-b180cc5c936a", 00:21:34.607 "is_configured": true, 00:21:34.607 "data_offset": 2048, 00:21:34.607 "data_size": 63488 00:21:34.607 } 00:21:34.607 ] 00:21:34.607 }' 00:21:34.607 13:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:34.607 13:41:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:35.175 13:41:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:35.175 13:41:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.175 13:41:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:35.175 13:41:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:35.175 13:41:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.175 13:41:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:21:35.175 13:41:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:35.175 13:41:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.175 13:41:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:35.175 [2024-11-20 13:41:34.406489] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:35.175 13:41:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.175 13:41:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring 
raid5f 64 4 00:21:35.175 13:41:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:35.175 13:41:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:35.175 13:41:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:35.175 13:41:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:35.175 13:41:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:35.175 13:41:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:35.175 13:41:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:35.175 13:41:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:35.175 13:41:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:35.175 13:41:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:35.175 13:41:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:35.175 13:41:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.175 13:41:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:35.175 13:41:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.175 13:41:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:35.175 "name": "Existed_Raid", 00:21:35.175 "uuid": "107f9780-0ff9-4c5b-825f-2e590390f014", 00:21:35.175 "strip_size_kb": 64, 00:21:35.175 "state": "configuring", 00:21:35.175 "raid_level": "raid5f", 00:21:35.175 
"superblock": true, 00:21:35.175 "num_base_bdevs": 4, 00:21:35.175 "num_base_bdevs_discovered": 2, 00:21:35.175 "num_base_bdevs_operational": 4, 00:21:35.175 "base_bdevs_list": [ 00:21:35.175 { 00:21:35.175 "name": null, 00:21:35.175 "uuid": "84ed169c-c725-4de2-ae2b-f62541e7487b", 00:21:35.175 "is_configured": false, 00:21:35.175 "data_offset": 0, 00:21:35.175 "data_size": 63488 00:21:35.175 }, 00:21:35.175 { 00:21:35.175 "name": null, 00:21:35.175 "uuid": "279f81fb-37c9-45a0-bdaa-58edb2a6928b", 00:21:35.175 "is_configured": false, 00:21:35.175 "data_offset": 0, 00:21:35.175 "data_size": 63488 00:21:35.175 }, 00:21:35.175 { 00:21:35.175 "name": "BaseBdev3", 00:21:35.175 "uuid": "af188a15-760d-4911-b896-87fd5caa4586", 00:21:35.175 "is_configured": true, 00:21:35.175 "data_offset": 2048, 00:21:35.175 "data_size": 63488 00:21:35.175 }, 00:21:35.175 { 00:21:35.175 "name": "BaseBdev4", 00:21:35.175 "uuid": "8f71a23b-5db0-41c4-85a2-b180cc5c936a", 00:21:35.175 "is_configured": true, 00:21:35.175 "data_offset": 2048, 00:21:35.175 "data_size": 63488 00:21:35.175 } 00:21:35.175 ] 00:21:35.175 }' 00:21:35.175 13:41:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:35.175 13:41:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:35.745 13:41:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:35.745 13:41:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:35.745 13:41:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.745 13:41:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:35.745 13:41:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.745 13:41:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- 
# [[ false == \f\a\l\s\e ]] 00:21:35.745 13:41:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:35.745 13:41:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.745 13:41:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:35.745 [2024-11-20 13:41:35.018509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:35.745 13:41:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.745 13:41:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:35.745 13:41:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:35.745 13:41:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:35.745 13:41:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:35.745 13:41:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:35.745 13:41:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:35.745 13:41:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:35.745 13:41:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:35.745 13:41:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:35.745 13:41:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:35.745 13:41:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:35.745 13:41:35 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:35.745 13:41:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.745 13:41:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:35.745 13:41:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.745 13:41:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:35.745 "name": "Existed_Raid", 00:21:35.745 "uuid": "107f9780-0ff9-4c5b-825f-2e590390f014", 00:21:35.745 "strip_size_kb": 64, 00:21:35.745 "state": "configuring", 00:21:35.745 "raid_level": "raid5f", 00:21:35.745 "superblock": true, 00:21:35.745 "num_base_bdevs": 4, 00:21:35.745 "num_base_bdevs_discovered": 3, 00:21:35.745 "num_base_bdevs_operational": 4, 00:21:35.745 "base_bdevs_list": [ 00:21:35.745 { 00:21:35.745 "name": null, 00:21:35.745 "uuid": "84ed169c-c725-4de2-ae2b-f62541e7487b", 00:21:35.745 "is_configured": false, 00:21:35.745 "data_offset": 0, 00:21:35.745 "data_size": 63488 00:21:35.745 }, 00:21:35.746 { 00:21:35.746 "name": "BaseBdev2", 00:21:35.746 "uuid": "279f81fb-37c9-45a0-bdaa-58edb2a6928b", 00:21:35.746 "is_configured": true, 00:21:35.746 "data_offset": 2048, 00:21:35.746 "data_size": 63488 00:21:35.746 }, 00:21:35.746 { 00:21:35.746 "name": "BaseBdev3", 00:21:35.746 "uuid": "af188a15-760d-4911-b896-87fd5caa4586", 00:21:35.746 "is_configured": true, 00:21:35.746 "data_offset": 2048, 00:21:35.746 "data_size": 63488 00:21:35.746 }, 00:21:35.746 { 00:21:35.746 "name": "BaseBdev4", 00:21:35.746 "uuid": "8f71a23b-5db0-41c4-85a2-b180cc5c936a", 00:21:35.746 "is_configured": true, 00:21:35.746 "data_offset": 2048, 00:21:35.746 "data_size": 63488 00:21:35.746 } 00:21:35.746 ] 00:21:35.746 }' 00:21:35.746 13:41:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:21:35.746 13:41:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:36.315 13:41:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:36.315 13:41:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:36.315 13:41:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.315 13:41:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:36.315 13:41:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.315 13:41:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:21:36.315 13:41:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:36.315 13:41:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:36.315 13:41:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.315 13:41:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:36.315 13:41:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.315 13:41:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 84ed169c-c725-4de2-ae2b-f62541e7487b 00:21:36.315 13:41:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.315 13:41:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:36.315 [2024-11-20 13:41:35.644611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:36.315 [2024-11-20 13:41:35.644871] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:36.315 [2024-11-20 13:41:35.644886] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:36.315 [2024-11-20 13:41:35.645186] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:21:36.315 NewBaseBdev 00:21:36.315 13:41:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.315 13:41:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:21:36.315 13:41:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:21:36.315 13:41:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:36.315 13:41:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:36.315 13:41:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:36.315 13:41:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:36.315 13:41:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:36.315 13:41:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.315 13:41:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:36.316 [2024-11-20 13:41:35.653019] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:36.316 [2024-11-20 13:41:35.653208] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:21:36.316 [2024-11-20 13:41:35.653495] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:36.316 13:41:35 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.316 13:41:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:36.316 13:41:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.316 13:41:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:36.316 [ 00:21:36.316 { 00:21:36.316 "name": "NewBaseBdev", 00:21:36.316 "aliases": [ 00:21:36.316 "84ed169c-c725-4de2-ae2b-f62541e7487b" 00:21:36.316 ], 00:21:36.316 "product_name": "Malloc disk", 00:21:36.316 "block_size": 512, 00:21:36.316 "num_blocks": 65536, 00:21:36.316 "uuid": "84ed169c-c725-4de2-ae2b-f62541e7487b", 00:21:36.316 "assigned_rate_limits": { 00:21:36.316 "rw_ios_per_sec": 0, 00:21:36.316 "rw_mbytes_per_sec": 0, 00:21:36.316 "r_mbytes_per_sec": 0, 00:21:36.316 "w_mbytes_per_sec": 0 00:21:36.316 }, 00:21:36.316 "claimed": true, 00:21:36.316 "claim_type": "exclusive_write", 00:21:36.316 "zoned": false, 00:21:36.316 "supported_io_types": { 00:21:36.316 "read": true, 00:21:36.316 "write": true, 00:21:36.316 "unmap": true, 00:21:36.316 "flush": true, 00:21:36.316 "reset": true, 00:21:36.316 "nvme_admin": false, 00:21:36.316 "nvme_io": false, 00:21:36.316 "nvme_io_md": false, 00:21:36.316 "write_zeroes": true, 00:21:36.316 "zcopy": true, 00:21:36.316 "get_zone_info": false, 00:21:36.316 "zone_management": false, 00:21:36.316 "zone_append": false, 00:21:36.316 "compare": false, 00:21:36.316 "compare_and_write": false, 00:21:36.316 "abort": true, 00:21:36.316 "seek_hole": false, 00:21:36.316 "seek_data": false, 00:21:36.316 "copy": true, 00:21:36.316 "nvme_iov_md": false 00:21:36.316 }, 00:21:36.316 "memory_domains": [ 00:21:36.316 { 00:21:36.316 "dma_device_id": "system", 00:21:36.316 "dma_device_type": 1 00:21:36.316 }, 00:21:36.316 { 00:21:36.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:36.316 "dma_device_type": 2 00:21:36.316 } 
00:21:36.316 ], 00:21:36.316 "driver_specific": {} 00:21:36.316 } 00:21:36.316 ] 00:21:36.316 13:41:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.316 13:41:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:36.316 13:41:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:21:36.316 13:41:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:36.316 13:41:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:36.316 13:41:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:36.316 13:41:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:36.316 13:41:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:36.316 13:41:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:36.316 13:41:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:36.316 13:41:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:36.316 13:41:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:36.316 13:41:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:36.316 13:41:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:36.316 13:41:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.316 13:41:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:36.316 
13:41:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.316 13:41:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:36.316 "name": "Existed_Raid", 00:21:36.316 "uuid": "107f9780-0ff9-4c5b-825f-2e590390f014", 00:21:36.316 "strip_size_kb": 64, 00:21:36.316 "state": "online", 00:21:36.316 "raid_level": "raid5f", 00:21:36.316 "superblock": true, 00:21:36.316 "num_base_bdevs": 4, 00:21:36.316 "num_base_bdevs_discovered": 4, 00:21:36.316 "num_base_bdevs_operational": 4, 00:21:36.316 "base_bdevs_list": [ 00:21:36.316 { 00:21:36.316 "name": "NewBaseBdev", 00:21:36.316 "uuid": "84ed169c-c725-4de2-ae2b-f62541e7487b", 00:21:36.316 "is_configured": true, 00:21:36.316 "data_offset": 2048, 00:21:36.316 "data_size": 63488 00:21:36.316 }, 00:21:36.316 { 00:21:36.316 "name": "BaseBdev2", 00:21:36.316 "uuid": "279f81fb-37c9-45a0-bdaa-58edb2a6928b", 00:21:36.316 "is_configured": true, 00:21:36.316 "data_offset": 2048, 00:21:36.316 "data_size": 63488 00:21:36.316 }, 00:21:36.316 { 00:21:36.316 "name": "BaseBdev3", 00:21:36.316 "uuid": "af188a15-760d-4911-b896-87fd5caa4586", 00:21:36.316 "is_configured": true, 00:21:36.316 "data_offset": 2048, 00:21:36.316 "data_size": 63488 00:21:36.316 }, 00:21:36.316 { 00:21:36.316 "name": "BaseBdev4", 00:21:36.316 "uuid": "8f71a23b-5db0-41c4-85a2-b180cc5c936a", 00:21:36.316 "is_configured": true, 00:21:36.316 "data_offset": 2048, 00:21:36.316 "data_size": 63488 00:21:36.316 } 00:21:36.316 ] 00:21:36.316 }' 00:21:36.316 13:41:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:36.316 13:41:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:36.885 13:41:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:21:36.885 13:41:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:21:36.885 13:41:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:36.885 13:41:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:36.885 13:41:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:21:36.885 13:41:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:36.885 13:41:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:36.886 13:41:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.886 13:41:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:36.886 13:41:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:36.886 [2024-11-20 13:41:36.154449] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:36.886 13:41:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.886 13:41:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:36.886 "name": "Existed_Raid", 00:21:36.886 "aliases": [ 00:21:36.886 "107f9780-0ff9-4c5b-825f-2e590390f014" 00:21:36.886 ], 00:21:36.886 "product_name": "Raid Volume", 00:21:36.886 "block_size": 512, 00:21:36.886 "num_blocks": 190464, 00:21:36.886 "uuid": "107f9780-0ff9-4c5b-825f-2e590390f014", 00:21:36.886 "assigned_rate_limits": { 00:21:36.886 "rw_ios_per_sec": 0, 00:21:36.886 "rw_mbytes_per_sec": 0, 00:21:36.886 "r_mbytes_per_sec": 0, 00:21:36.886 "w_mbytes_per_sec": 0 00:21:36.886 }, 00:21:36.886 "claimed": false, 00:21:36.886 "zoned": false, 00:21:36.886 "supported_io_types": { 00:21:36.886 "read": true, 00:21:36.886 "write": true, 00:21:36.886 "unmap": false, 00:21:36.886 "flush": false, 
00:21:36.886 "reset": true, 00:21:36.886 "nvme_admin": false, 00:21:36.886 "nvme_io": false, 00:21:36.886 "nvme_io_md": false, 00:21:36.886 "write_zeroes": true, 00:21:36.886 "zcopy": false, 00:21:36.886 "get_zone_info": false, 00:21:36.886 "zone_management": false, 00:21:36.886 "zone_append": false, 00:21:36.886 "compare": false, 00:21:36.886 "compare_and_write": false, 00:21:36.886 "abort": false, 00:21:36.886 "seek_hole": false, 00:21:36.886 "seek_data": false, 00:21:36.886 "copy": false, 00:21:36.886 "nvme_iov_md": false 00:21:36.886 }, 00:21:36.886 "driver_specific": { 00:21:36.886 "raid": { 00:21:36.886 "uuid": "107f9780-0ff9-4c5b-825f-2e590390f014", 00:21:36.886 "strip_size_kb": 64, 00:21:36.886 "state": "online", 00:21:36.886 "raid_level": "raid5f", 00:21:36.886 "superblock": true, 00:21:36.886 "num_base_bdevs": 4, 00:21:36.886 "num_base_bdevs_discovered": 4, 00:21:36.886 "num_base_bdevs_operational": 4, 00:21:36.886 "base_bdevs_list": [ 00:21:36.886 { 00:21:36.886 "name": "NewBaseBdev", 00:21:36.886 "uuid": "84ed169c-c725-4de2-ae2b-f62541e7487b", 00:21:36.886 "is_configured": true, 00:21:36.886 "data_offset": 2048, 00:21:36.886 "data_size": 63488 00:21:36.886 }, 00:21:36.886 { 00:21:36.886 "name": "BaseBdev2", 00:21:36.886 "uuid": "279f81fb-37c9-45a0-bdaa-58edb2a6928b", 00:21:36.886 "is_configured": true, 00:21:36.886 "data_offset": 2048, 00:21:36.886 "data_size": 63488 00:21:36.886 }, 00:21:36.886 { 00:21:36.886 "name": "BaseBdev3", 00:21:36.886 "uuid": "af188a15-760d-4911-b896-87fd5caa4586", 00:21:36.886 "is_configured": true, 00:21:36.886 "data_offset": 2048, 00:21:36.886 "data_size": 63488 00:21:36.886 }, 00:21:36.886 { 00:21:36.886 "name": "BaseBdev4", 00:21:36.886 "uuid": "8f71a23b-5db0-41c4-85a2-b180cc5c936a", 00:21:36.886 "is_configured": true, 00:21:36.886 "data_offset": 2048, 00:21:36.886 "data_size": 63488 00:21:36.886 } 00:21:36.886 ] 00:21:36.886 } 00:21:36.886 } 00:21:36.886 }' 00:21:36.886 13:41:36 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:36.886 13:41:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:21:36.886 BaseBdev2 00:21:36.886 BaseBdev3 00:21:36.886 BaseBdev4' 00:21:36.886 13:41:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:36.886 13:41:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:36.886 13:41:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:36.886 13:41:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:36.886 13:41:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:21:36.886 13:41:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.886 13:41:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:36.886 13:41:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.886 13:41:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:36.886 13:41:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:36.886 13:41:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:36.886 13:41:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:36.886 13:41:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.886 13:41:36 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:36.886 13:41:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:36.886 13:41:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.145 13:41:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:37.145 13:41:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:37.145 13:41:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:37.145 13:41:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:37.145 13:41:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.145 13:41:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:37.145 13:41:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:37.145 13:41:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.145 13:41:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:37.146 13:41:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:37.146 13:41:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:37.146 13:41:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:37.146 13:41:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:21:37.146 13:41:36 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.146 13:41:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:37.146 13:41:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.146 13:41:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:37.146 13:41:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:37.146 13:41:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:37.146 13:41:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.146 13:41:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:37.146 [2024-11-20 13:41:36.486295] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:37.146 [2024-11-20 13:41:36.486337] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:37.146 [2024-11-20 13:41:36.486423] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:37.146 [2024-11-20 13:41:36.486716] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:37.146 [2024-11-20 13:41:36.486730] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:21:37.146 13:41:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.146 13:41:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83240 00:21:37.146 13:41:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83240 ']' 00:21:37.146 13:41:36 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 83240 00:21:37.146 13:41:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:21:37.146 13:41:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:37.146 13:41:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83240 00:21:37.146 killing process with pid 83240 00:21:37.146 13:41:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:37.146 13:41:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:37.146 13:41:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83240' 00:21:37.146 13:41:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83240 00:21:37.146 [2024-11-20 13:41:36.543199] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:37.146 13:41:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83240 00:21:37.713 [2024-11-20 13:41:36.949488] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:38.651 13:41:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:21:38.651 00:21:38.651 real 0m12.443s 00:21:38.651 user 0m19.776s 00:21:38.651 sys 0m2.633s 00:21:38.651 13:41:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:38.651 ************************************ 00:21:38.651 END TEST raid5f_state_function_test_sb 00:21:38.651 ************************************ 00:21:38.651 13:41:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:38.930 13:41:38 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:21:38.930 13:41:38 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:38.930 13:41:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:38.930 13:41:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:38.930 ************************************ 00:21:38.930 START TEST raid5f_superblock_test 00:21:38.930 ************************************ 00:21:38.930 13:41:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:21:38.930 13:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:21:38.930 13:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:21:38.930 13:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:38.930 13:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:38.930 13:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:38.930 13:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:21:38.930 13:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:38.930 13:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:38.930 13:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:21:38.930 13:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:21:38.930 13:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:21:38.930 13:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:38.930 13:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:38.930 13:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:21:38.930 13:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:21:38.930 13:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:21:38.930 13:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83915 00:21:38.930 13:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:21:38.930 13:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83915 00:21:38.930 13:41:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 83915 ']' 00:21:38.930 13:41:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.930 13:41:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:38.930 13:41:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.930 13:41:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:38.930 13:41:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.930 [2024-11-20 13:41:38.303777] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:21:38.930 [2024-11-20 13:41:38.304136] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83915 ] 00:21:39.189 [2024-11-20 13:41:38.486920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.189 [2024-11-20 13:41:38.612882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.448 [2024-11-20 13:41:38.840525] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:39.448 [2024-11-20 13:41:38.840595] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:40.015 13:41:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:40.015 13:41:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:21:40.015 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:40.015 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:40.015 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:40.015 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:21:40.015 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:40.015 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:40.015 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:40.015 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:40.015 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:21:40.015 13:41:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.015 13:41:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.015 malloc1 00:21:40.015 13:41:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.015 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:40.015 13:41:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.015 13:41:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.015 [2024-11-20 13:41:39.255708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:40.015 [2024-11-20 13:41:39.255930] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:40.015 [2024-11-20 13:41:39.256001] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:40.015 [2024-11-20 13:41:39.256108] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:40.015 [2024-11-20 13:41:39.258796] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:40.015 [2024-11-20 13:41:39.258841] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:40.015 pt1 00:21:40.015 13:41:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.015 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:40.015 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:40.015 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:40.015 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:21:40.015 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:40.015 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:40.015 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:40.015 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:40.015 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:21:40.015 13:41:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.015 13:41:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.015 malloc2 00:21:40.015 13:41:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.015 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:40.015 13:41:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.015 13:41:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.015 [2024-11-20 13:41:39.314588] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:40.015 [2024-11-20 13:41:39.314814] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:40.015 [2024-11-20 13:41:39.314887] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:40.015 [2024-11-20 13:41:39.314983] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:40.015 [2024-11-20 13:41:39.317641] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:40.015 [2024-11-20 13:41:39.317786] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:40.015 pt2 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.016 malloc3 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.016 [2024-11-20 13:41:39.390395] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:40.016 [2024-11-20 13:41:39.390464] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:40.016 [2024-11-20 13:41:39.390494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:40.016 [2024-11-20 13:41:39.390510] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:40.016 [2024-11-20 13:41:39.393290] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:40.016 [2024-11-20 13:41:39.393333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:40.016 pt3 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.016 13:41:39 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.016 malloc4 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.016 [2024-11-20 13:41:39.449246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:40.016 [2024-11-20 13:41:39.449444] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:40.016 [2024-11-20 13:41:39.449521] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:40.016 [2024-11-20 13:41:39.449598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:40.016 [2024-11-20 13:41:39.452203] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:40.016 [2024-11-20 13:41:39.452344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:40.016 pt4 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:40.016 [2024-11-20 13:41:39.461306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:40.016 [2024-11-20 13:41:39.463489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:40.016 [2024-11-20 13:41:39.463584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:40.016 [2024-11-20 13:41:39.463632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:40.016 [2024-11-20 13:41:39.463841] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:40.016 [2024-11-20 13:41:39.463859] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:40.016 [2024-11-20 13:41:39.464174] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:40.016 [2024-11-20 13:41:39.472377] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:40.016 [2024-11-20 13:41:39.472510] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:40.016 [2024-11-20 13:41:39.472850] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:40.016 
13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:40.016 13:41:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.276 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:40.276 "name": "raid_bdev1", 00:21:40.276 "uuid": "659e6e06-650a-4e58-8838-412377df946f", 00:21:40.276 "strip_size_kb": 64, 00:21:40.276 "state": "online", 00:21:40.276 "raid_level": "raid5f", 00:21:40.276 "superblock": true, 00:21:40.276 "num_base_bdevs": 4, 00:21:40.276 "num_base_bdevs_discovered": 4, 00:21:40.276 "num_base_bdevs_operational": 4, 00:21:40.276 "base_bdevs_list": [ 00:21:40.276 { 00:21:40.276 "name": "pt1", 00:21:40.276 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:40.276 "is_configured": true, 00:21:40.276 "data_offset": 2048, 00:21:40.276 "data_size": 63488 00:21:40.276 }, 00:21:40.276 { 00:21:40.276 "name": "pt2", 00:21:40.276 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:40.276 "is_configured": true, 00:21:40.276 "data_offset": 2048, 00:21:40.276 
"data_size": 63488 00:21:40.276 }, 00:21:40.276 { 00:21:40.276 "name": "pt3", 00:21:40.276 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:40.276 "is_configured": true, 00:21:40.276 "data_offset": 2048, 00:21:40.276 "data_size": 63488 00:21:40.276 }, 00:21:40.276 { 00:21:40.276 "name": "pt4", 00:21:40.276 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:40.276 "is_configured": true, 00:21:40.276 "data_offset": 2048, 00:21:40.276 "data_size": 63488 00:21:40.276 } 00:21:40.276 ] 00:21:40.276 }' 00:21:40.276 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:40.276 13:41:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.535 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:40.535 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:40.535 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:40.536 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:40.536 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:40.536 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:40.536 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:40.536 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:40.536 13:41:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.536 13:41:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.536 [2024-11-20 13:41:39.901697] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:40.536 13:41:39 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.536 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:40.536 "name": "raid_bdev1", 00:21:40.536 "aliases": [ 00:21:40.536 "659e6e06-650a-4e58-8838-412377df946f" 00:21:40.536 ], 00:21:40.536 "product_name": "Raid Volume", 00:21:40.536 "block_size": 512, 00:21:40.536 "num_blocks": 190464, 00:21:40.536 "uuid": "659e6e06-650a-4e58-8838-412377df946f", 00:21:40.536 "assigned_rate_limits": { 00:21:40.536 "rw_ios_per_sec": 0, 00:21:40.536 "rw_mbytes_per_sec": 0, 00:21:40.536 "r_mbytes_per_sec": 0, 00:21:40.536 "w_mbytes_per_sec": 0 00:21:40.536 }, 00:21:40.536 "claimed": false, 00:21:40.536 "zoned": false, 00:21:40.536 "supported_io_types": { 00:21:40.536 "read": true, 00:21:40.536 "write": true, 00:21:40.536 "unmap": false, 00:21:40.536 "flush": false, 00:21:40.536 "reset": true, 00:21:40.536 "nvme_admin": false, 00:21:40.536 "nvme_io": false, 00:21:40.536 "nvme_io_md": false, 00:21:40.536 "write_zeroes": true, 00:21:40.536 "zcopy": false, 00:21:40.536 "get_zone_info": false, 00:21:40.536 "zone_management": false, 00:21:40.536 "zone_append": false, 00:21:40.536 "compare": false, 00:21:40.536 "compare_and_write": false, 00:21:40.536 "abort": false, 00:21:40.536 "seek_hole": false, 00:21:40.536 "seek_data": false, 00:21:40.536 "copy": false, 00:21:40.536 "nvme_iov_md": false 00:21:40.536 }, 00:21:40.536 "driver_specific": { 00:21:40.536 "raid": { 00:21:40.536 "uuid": "659e6e06-650a-4e58-8838-412377df946f", 00:21:40.536 "strip_size_kb": 64, 00:21:40.536 "state": "online", 00:21:40.536 "raid_level": "raid5f", 00:21:40.536 "superblock": true, 00:21:40.536 "num_base_bdevs": 4, 00:21:40.536 "num_base_bdevs_discovered": 4, 00:21:40.536 "num_base_bdevs_operational": 4, 00:21:40.536 "base_bdevs_list": [ 00:21:40.536 { 00:21:40.536 "name": "pt1", 00:21:40.536 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:40.536 "is_configured": true, 00:21:40.536 "data_offset": 2048, 
00:21:40.536 "data_size": 63488 00:21:40.536 }, 00:21:40.536 { 00:21:40.536 "name": "pt2", 00:21:40.536 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:40.536 "is_configured": true, 00:21:40.536 "data_offset": 2048, 00:21:40.536 "data_size": 63488 00:21:40.536 }, 00:21:40.536 { 00:21:40.536 "name": "pt3", 00:21:40.536 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:40.536 "is_configured": true, 00:21:40.536 "data_offset": 2048, 00:21:40.536 "data_size": 63488 00:21:40.536 }, 00:21:40.536 { 00:21:40.536 "name": "pt4", 00:21:40.536 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:40.536 "is_configured": true, 00:21:40.536 "data_offset": 2048, 00:21:40.536 "data_size": 63488 00:21:40.536 } 00:21:40.536 ] 00:21:40.536 } 00:21:40.536 } 00:21:40.536 }' 00:21:40.536 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:40.536 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:40.536 pt2 00:21:40.536 pt3 00:21:40.536 pt4' 00:21:40.536 13:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:40.536 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:40.536 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:40.536 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:40.536 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:40.536 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.536 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.795 13:41:40 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.795 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:40.795 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:40.795 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:40.795 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:40.795 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:40.795 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.795 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.795 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.795 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:40.795 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:40.795 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:40.796 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:21:40.796 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:40.796 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.796 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.796 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.796 13:41:40 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:40.796 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:40.796 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:40.796 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:21:40.796 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:40.796 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.796 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.796 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.796 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:40.796 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:40.796 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:40.796 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.796 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.796 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:40.796 [2024-11-20 13:41:40.205467] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:40.796 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.796 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=659e6e06-650a-4e58-8838-412377df946f 00:21:40.796 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
659e6e06-650a-4e58-8838-412377df946f ']' 00:21:40.796 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:40.796 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.796 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.796 [2024-11-20 13:41:40.245121] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:40.796 [2024-11-20 13:41:40.245152] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:40.796 [2024-11-20 13:41:40.245242] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:40.796 [2024-11-20 13:41:40.245331] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:40.796 [2024-11-20 13:41:40.245351] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:40.796 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.796 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.796 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:40.796 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.796 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.796 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.056 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:41.056 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:41.056 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:41.056 
13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:21:41.056 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.056 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.056 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.056 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:41.056 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:41.056 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.056 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.056 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.057 13:41:40 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.057 [2024-11-20 13:41:40.420886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:41.057 [2024-11-20 13:41:40.423176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:41.057 [2024-11-20 13:41:40.423371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:41.057 [2024-11-20 13:41:40.423417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:21:41.057 [2024-11-20 13:41:40.423472] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:41.057 [2024-11-20 13:41:40.423525] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:41.057 [2024-11-20 13:41:40.423547] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:21:41.057 [2024-11-20 13:41:40.423569] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:21:41.057 [2024-11-20 13:41:40.423585] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:41.057 [2024-11-20 13:41:40.423597] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:21:41.057 request: 00:21:41.057 { 00:21:41.057 "name": "raid_bdev1", 00:21:41.057 "raid_level": "raid5f", 00:21:41.057 "base_bdevs": [ 00:21:41.057 "malloc1", 00:21:41.057 "malloc2", 00:21:41.057 "malloc3", 00:21:41.057 "malloc4" 00:21:41.057 ], 00:21:41.057 "strip_size_kb": 64, 00:21:41.057 "superblock": false, 00:21:41.057 "method": "bdev_raid_create", 00:21:41.057 "req_id": 1 00:21:41.057 } 00:21:41.057 Got JSON-RPC error response 
00:21:41.057 response: 00:21:41.057 { 00:21:41.057 "code": -17, 00:21:41.057 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:41.057 } 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.057 [2024-11-20 13:41:40.488752] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:41.057 [2024-11-20 13:41:40.488830] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:21:41.057 [2024-11-20 13:41:40.488851] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:41.057 [2024-11-20 13:41:40.488867] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:41.057 [2024-11-20 13:41:40.491506] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:41.057 [2024-11-20 13:41:40.491558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:41.057 [2024-11-20 13:41:40.491651] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:41.057 [2024-11-20 13:41:40.491718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:41.057 pt1 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:41.057 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp
00:21:41.058 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:41.058 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:41.058 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:41.058 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:41.058 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:41.317 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:21:41.317 "name": "raid_bdev1",
00:21:41.317 "uuid": "659e6e06-650a-4e58-8838-412377df946f",
00:21:41.317 "strip_size_kb": 64,
00:21:41.317 "state": "configuring",
00:21:41.317 "raid_level": "raid5f",
00:21:41.317 "superblock": true,
00:21:41.317 "num_base_bdevs": 4,
00:21:41.317 "num_base_bdevs_discovered": 1,
00:21:41.317 "num_base_bdevs_operational": 4,
00:21:41.317 "base_bdevs_list": [
00:21:41.317 {
00:21:41.317 "name": "pt1",
00:21:41.317 "uuid": "00000000-0000-0000-0000-000000000001",
00:21:41.317 "is_configured": true,
00:21:41.317 "data_offset": 2048,
00:21:41.317 "data_size": 63488
00:21:41.317 },
00:21:41.317 {
00:21:41.317 "name": null,
00:21:41.317 "uuid": "00000000-0000-0000-0000-000000000002",
00:21:41.317 "is_configured": false,
00:21:41.317 "data_offset": 2048,
00:21:41.317 "data_size": 63488
00:21:41.317 },
00:21:41.317 {
00:21:41.317 "name": null,
00:21:41.317 "uuid": "00000000-0000-0000-0000-000000000003",
00:21:41.317 "is_configured": false,
00:21:41.317 "data_offset": 2048,
00:21:41.317 "data_size": 63488
00:21:41.317 },
00:21:41.317 {
00:21:41.317 "name": null,
00:21:41.317 "uuid": "00000000-0000-0000-0000-000000000004",
00:21:41.317 "is_configured": false,
00:21:41.317 "data_offset": 2048,
00:21:41.317 "data_size": 63488
00:21:41.317 }
00:21:41.317 ]
00:21:41.317 }'
00:21:41.317 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:21:41.317 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:41.577 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:21:41.577 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:21:41.577 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:41.577 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:41.577 [2024-11-20 13:41:40.932235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:21:41.577 [2024-11-20 13:41:40.932481] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:41.577 [2024-11-20 13:41:40.932511] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:21:41.577 [2024-11-20 13:41:40.932526] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:41.577 [2024-11-20 13:41:40.932973] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:41.577 [2024-11-20 13:41:40.932995] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:21:41.577 [2024-11-20 13:41:40.933100] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:21:41.577 [2024-11-20 13:41:40.933128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:21:41.577 pt2
00:21:41.577 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:41.577 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:21:41.577 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:41.577 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:41.577 [2024-11-20 13:41:40.944207] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:21:41.577 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:41.577 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4
00:21:41.577 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:21:41.577 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:21:41.577 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:21:41.577 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:21:41.577 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:21:41.577 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:21:41.577 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:21:41.577 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:21:41.577 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:21:41.577 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:41.577 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:41.577 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:41.577 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:41.577 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:41.577 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:21:41.577 "name": "raid_bdev1",
00:21:41.577 "uuid": "659e6e06-650a-4e58-8838-412377df946f",
00:21:41.577 "strip_size_kb": 64,
00:21:41.577 "state": "configuring",
00:21:41.577 "raid_level": "raid5f",
00:21:41.577 "superblock": true,
00:21:41.577 "num_base_bdevs": 4,
00:21:41.577 "num_base_bdevs_discovered": 1,
00:21:41.577 "num_base_bdevs_operational": 4,
00:21:41.577 "base_bdevs_list": [
00:21:41.577 {
00:21:41.577 "name": "pt1",
00:21:41.577 "uuid": "00000000-0000-0000-0000-000000000001",
00:21:41.577 "is_configured": true,
00:21:41.577 "data_offset": 2048,
00:21:41.577 "data_size": 63488
00:21:41.577 },
00:21:41.577 {
00:21:41.577 "name": null,
00:21:41.577 "uuid": "00000000-0000-0000-0000-000000000002",
00:21:41.577 "is_configured": false,
00:21:41.577 "data_offset": 0,
00:21:41.577 "data_size": 63488
00:21:41.577 },
00:21:41.577 {
00:21:41.577 "name": null,
00:21:41.577 "uuid": "00000000-0000-0000-0000-000000000003",
00:21:41.577 "is_configured": false,
00:21:41.577 "data_offset": 2048,
00:21:41.577 "data_size": 63488
00:21:41.577 },
00:21:41.577 {
00:21:41.577 "name": null,
00:21:41.577 "uuid": "00000000-0000-0000-0000-000000000004",
00:21:41.577 "is_configured": false,
00:21:41.577 "data_offset": 2048,
00:21:41.577 "data_size": 63488
00:21:41.577 }
00:21:41.577 ]
00:21:41.577 }'
00:21:41.577 13:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:21:41.577 13:41:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:42.146 13:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:21:42.146 13:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:21:42.146 13:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:21:42.146 13:41:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:42.146 13:41:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:42.146 [2024-11-20 13:41:41.419555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:21:42.146 [2024-11-20 13:41:41.419649] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:42.146 [2024-11-20 13:41:41.419675] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:21:42.146 [2024-11-20 13:41:41.419687] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:42.146 [2024-11-20 13:41:41.420191] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:42.146 [2024-11-20 13:41:41.420228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:21:42.146 [2024-11-20 13:41:41.420336] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:21:42.146 [2024-11-20 13:41:41.420365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:21:42.146 pt2
00:21:42.146 13:41:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:42.146 13:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:21:42.147 13:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:21:42.147 13:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:21:42.147 13:41:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:42.147 13:41:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:42.147 [2024-11-20 13:41:41.431510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:21:42.147 [2024-11-20 13:41:41.431567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:42.147 [2024-11-20 13:41:41.431595] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:21:42.147 [2024-11-20 13:41:41.431607] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:42.147 [2024-11-20 13:41:41.432062] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:42.147 [2024-11-20 13:41:41.432081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:21:42.147 [2024-11-20 13:41:41.432188] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:21:42.147 [2024-11-20 13:41:41.432218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:21:42.147 pt3
00:21:42.147 13:41:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:42.147 13:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:21:42.147 13:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:21:42.147 13:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:21:42.147 13:41:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:42.147 13:41:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:42.147 [2024-11-20 13:41:41.443470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:21:42.147 [2024-11-20 13:41:41.443524] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:42.147 [2024-11-20 13:41:41.443546] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:21:42.147 [2024-11-20 13:41:41.443558] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:42.147 [2024-11-20 13:41:41.444028] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:42.147 [2024-11-20 13:41:41.444045] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:21:42.147 [2024-11-20 13:41:41.444145] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:21:42.147 [2024-11-20 13:41:41.444170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:21:42.147 [2024-11-20 13:41:41.444299] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:21:42.147 [2024-11-20 13:41:41.444309] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:21:42.147 [2024-11-20 13:41:41.444573] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:21:42.147 [2024-11-20 13:41:41.452347] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:21:42.147 [2024-11-20 13:41:41.452374] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:21:42.147 [2024-11-20 13:41:41.452583] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:21:42.147 pt4
00:21:42.147 13:41:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:42.147 13:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:21:42.147 13:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:21:42.147 13:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:21:42.147 13:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:21:42.147 13:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:21:42.147 13:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:21:42.147 13:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:21:42.147 13:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:21:42.147 13:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:21:42.147 13:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:21:42.147 13:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:21:42.147 13:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:21:42.147 13:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:42.147 13:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:42.147 13:41:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:42.147 13:41:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:42.147 13:41:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:42.147 13:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:21:42.147 "name": "raid_bdev1",
00:21:42.147 "uuid": "659e6e06-650a-4e58-8838-412377df946f",
00:21:42.147 "strip_size_kb": 64,
00:21:42.147 "state": "online",
00:21:42.147 "raid_level": "raid5f",
00:21:42.147 "superblock": true,
00:21:42.147 "num_base_bdevs": 4,
00:21:42.147 "num_base_bdevs_discovered": 4,
00:21:42.147 "num_base_bdevs_operational": 4,
00:21:42.147 "base_bdevs_list": [
00:21:42.147 {
00:21:42.147 "name": "pt1",
00:21:42.147 "uuid": "00000000-0000-0000-0000-000000000001",
00:21:42.147 "is_configured": true,
00:21:42.147 "data_offset": 2048,
00:21:42.147 "data_size": 63488
00:21:42.147 },
00:21:42.147 {
00:21:42.147 "name": "pt2",
00:21:42.147 "uuid": "00000000-0000-0000-0000-000000000002",
00:21:42.147 "is_configured": true,
00:21:42.147 "data_offset": 2048,
00:21:42.147 "data_size": 63488
00:21:42.147 },
00:21:42.147 {
00:21:42.147 "name": "pt3",
00:21:42.147 "uuid": "00000000-0000-0000-0000-000000000003",
00:21:42.147 "is_configured": true,
00:21:42.147 "data_offset": 2048,
00:21:42.147 "data_size": 63488
00:21:42.147 },
00:21:42.147 {
00:21:42.147 "name": "pt4",
00:21:42.147 "uuid": "00000000-0000-0000-0000-000000000004",
00:21:42.147 "is_configured": true,
00:21:42.147 "data_offset": 2048,
00:21:42.147 "data_size": 63488
00:21:42.147 }
00:21:42.147 ]
00:21:42.147 }'
00:21:42.147 13:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:21:42.147 13:41:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:42.712 13:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:21:42.712 13:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:21:42.712 13:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:21:42.712 13:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:21:42.712 13:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:21:42.712 13:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:21:42.712 13:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:21:42.712 13:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:21:42.712 13:41:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:42.713 13:41:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:42.713 [2024-11-20 13:41:41.913456] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:21:42.713 13:41:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:42.713 13:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:21:42.713 "name": "raid_bdev1",
00:21:42.713 "aliases": [
00:21:42.713 "659e6e06-650a-4e58-8838-412377df946f"
00:21:42.713 ],
00:21:42.713 "product_name": "Raid Volume",
00:21:42.713 "block_size": 512,
00:21:42.713 "num_blocks": 190464,
00:21:42.713 "uuid": "659e6e06-650a-4e58-8838-412377df946f",
00:21:42.713 "assigned_rate_limits": {
00:21:42.713 "rw_ios_per_sec": 0,
00:21:42.713 "rw_mbytes_per_sec": 0,
00:21:42.713 "r_mbytes_per_sec": 0,
00:21:42.713 "w_mbytes_per_sec": 0
00:21:42.713 },
00:21:42.713 "claimed": false,
00:21:42.713 "zoned": false,
00:21:42.713 "supported_io_types": {
00:21:42.713 "read": true,
00:21:42.713 "write": true,
00:21:42.713 "unmap": false,
00:21:42.713 "flush": false,
00:21:42.713 "reset": true,
00:21:42.713 "nvme_admin": false,
00:21:42.713 "nvme_io": false,
00:21:42.713 "nvme_io_md": false,
00:21:42.713 "write_zeroes": true,
00:21:42.713 "zcopy": false,
00:21:42.713 "get_zone_info": false,
00:21:42.713 "zone_management": false,
00:21:42.713 "zone_append": false,
00:21:42.713 "compare": false,
00:21:42.713 "compare_and_write": false,
00:21:42.713 "abort": false,
00:21:42.713 "seek_hole": false,
00:21:42.713 "seek_data": false,
00:21:42.713 "copy": false,
00:21:42.713 "nvme_iov_md": false
00:21:42.713 },
00:21:42.713 "driver_specific": {
00:21:42.713 "raid": {
00:21:42.713 "uuid": "659e6e06-650a-4e58-8838-412377df946f",
00:21:42.713 "strip_size_kb": 64,
00:21:42.713 "state": "online",
00:21:42.713 "raid_level": "raid5f",
00:21:42.713 "superblock": true,
00:21:42.713 "num_base_bdevs": 4,
00:21:42.713 "num_base_bdevs_discovered": 4,
00:21:42.713 "num_base_bdevs_operational": 4,
00:21:42.713 "base_bdevs_list": [
00:21:42.713 {
00:21:42.713 "name": "pt1",
00:21:42.713 "uuid": "00000000-0000-0000-0000-000000000001",
00:21:42.713 "is_configured": true,
00:21:42.713 "data_offset": 2048,
00:21:42.713 "data_size": 63488
00:21:42.713 },
00:21:42.713 {
00:21:42.713 "name": "pt2",
00:21:42.713 "uuid": "00000000-0000-0000-0000-000000000002",
00:21:42.713 "is_configured": true,
00:21:42.713 "data_offset": 2048,
00:21:42.713 "data_size": 63488
00:21:42.713 },
00:21:42.713 {
00:21:42.713 "name": "pt3",
00:21:42.713 "uuid": "00000000-0000-0000-0000-000000000003",
00:21:42.713 "is_configured": true,
00:21:42.713 "data_offset": 2048,
00:21:42.713 "data_size": 63488
00:21:42.713 },
00:21:42.713 {
00:21:42.713 "name": "pt4",
00:21:42.713 "uuid": "00000000-0000-0000-0000-000000000004",
00:21:42.713 "is_configured": true,
00:21:42.713 "data_offset": 2048,
00:21:42.713 "data_size": 63488
00:21:42.713 }
00:21:42.713 ]
00:21:42.713 }
00:21:42.713 }
00:21:42.713 }'
00:21:42.713 13:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:21:42.713 13:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:21:42.713 pt2
00:21:42.713 pt3
00:21:42.713 pt4'
00:21:42.713 13:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:21:42.713 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:21:42.713 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:21:42.713 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:21:42.713 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:42.713 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:21:42.713 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:42.713 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:42.713 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:21:42.713 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:21:42.713 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:21:42.713 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:21:42.713 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:21:42.713 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:42.713 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:42.713 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:42.713 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:21:42.713 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:21:42.713 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:21:42.713 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:21:42.713 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:21:42.713 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:42.713 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:42.713 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:43.033 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:21:43.033 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:21:43.033 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:21:43.033 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:21:43.033 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:21:43.033 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:43.033 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:43.033 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:43.033 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:21:43.033 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:21:43.033 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:21:43.033 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:21:43.033 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:43.033 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:43.033 [2024-11-20 13:41:42.276936] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:21:43.033 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:43.033 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 659e6e06-650a-4e58-8838-412377df946f '!=' 659e6e06-650a-4e58-8838-412377df946f ']'
00:21:43.033 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f
00:21:43.033 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:21:43.033 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:21:43.033 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:21:43.033 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:43.033 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:43.033 [2024-11-20 13:41:42.320793] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:21:43.033 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:43.033 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:21:43.033 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:21:43.033 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:21:43.033 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:21:43.033 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:21:43.033 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:21:43.033 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:21:43.033 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:21:43.033 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:21:43.034 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:21:43.034 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:43.034 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:43.034 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:43.034 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:43.034 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:43.034 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:21:43.034 "name": "raid_bdev1",
00:21:43.034 "uuid": "659e6e06-650a-4e58-8838-412377df946f",
00:21:43.034 "strip_size_kb": 64,
00:21:43.034 "state": "online",
00:21:43.034 "raid_level": "raid5f",
00:21:43.034 "superblock": true,
00:21:43.034 "num_base_bdevs": 4,
00:21:43.034 "num_base_bdevs_discovered": 3,
00:21:43.034 "num_base_bdevs_operational": 3,
00:21:43.034 "base_bdevs_list": [
00:21:43.034 {
00:21:43.034 "name": null,
00:21:43.034 "uuid": "00000000-0000-0000-0000-000000000000",
00:21:43.034 "is_configured": false,
00:21:43.034 "data_offset": 0,
00:21:43.034 "data_size": 63488
00:21:43.034 },
00:21:43.034 {
00:21:43.034 "name": "pt2",
00:21:43.034 "uuid": "00000000-0000-0000-0000-000000000002",
00:21:43.034 "is_configured": true,
00:21:43.034 "data_offset": 2048,
00:21:43.034 "data_size": 63488
00:21:43.034 },
00:21:43.034 {
00:21:43.034 "name": "pt3",
00:21:43.034 "uuid": "00000000-0000-0000-0000-000000000003",
00:21:43.034 "is_configured": true,
00:21:43.034 "data_offset": 2048,
00:21:43.034 "data_size": 63488
00:21:43.034 },
00:21:43.034 {
00:21:43.034 "name": "pt4",
00:21:43.034 "uuid": "00000000-0000-0000-0000-000000000004",
00:21:43.034 "is_configured": true,
00:21:43.034 "data_offset": 2048,
00:21:43.034 "data_size": 63488
00:21:43.034 }
00:21:43.034 ]
00:21:43.034 }'
00:21:43.034 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:21:43.034 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:43.308 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:21:43.308 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:43.308 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:43.308 [2024-11-20 13:41:42.788045] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
[2024-11-20 13:41:42.788092] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
[2024-11-20 13:41:42.788179] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-11-20 13:41:42.788261] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-11-20 13:41:42.788273] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:21:43.308 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:43.569 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:21:43.569 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:43.569 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:43.569 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:43.569 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:43.569 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:21:43.569 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:21:43.569 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:21:43.569 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:21:43.569 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:21:43.569 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:43.569 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:43.569 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:43.569 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:21:43.569 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:21:43.569 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3
00:21:43.569 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:43.569 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:43.569 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:43.569 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:21:43.569 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:21:43.569 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4
00:21:43.569 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:43.569 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:43.569 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:43.569 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:21:43.569 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:21:43.569 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:21:43.569 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:21:43.569 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:21:43.569 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:43.569 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:43.569 [2024-11-20 13:41:42.883911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:21:43.569 [2024-11-20 13:41:42.884105] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:43.569 [2024-11-20 13:41:42.884138] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:21:43.569 [2024-11-20 13:41:42.884151] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:43.569 [2024-11-20 13:41:42.886625] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:43.570 [2024-11-20 13:41:42.886660] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:21:43.570 [2024-11-20 13:41:42.886751] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:21:43.570 [2024-11-20 13:41:42.886795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:21:43.570 pt2
00:21:43.570 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:43.570 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:21:43.570 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:21:43.570 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:21:43.570 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:21:43.570 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:21:43.570 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:21:43.570 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:21:43.570 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:21:43.570 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:21:43.570 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:21:43.570 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:43.570 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:43.570 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:43.570 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:43.570 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:43.570 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:21:43.570 "name": "raid_bdev1",
00:21:43.570 "uuid": "659e6e06-650a-4e58-8838-412377df946f",
00:21:43.570 "strip_size_kb": 64,
00:21:43.570 "state": "configuring",
00:21:43.570 "raid_level": "raid5f",
00:21:43.570 "superblock": true,
00:21:43.570 "num_base_bdevs": 4,
00:21:43.570 "num_base_bdevs_discovered": 1,
00:21:43.570 "num_base_bdevs_operational": 3,
00:21:43.570 "base_bdevs_list": [
00:21:43.570 {
00:21:43.570 "name": null,
00:21:43.570 "uuid": "00000000-0000-0000-0000-000000000000",
00:21:43.570 "is_configured": false,
00:21:43.570 "data_offset": 2048,
00:21:43.570 "data_size": 63488
00:21:43.570 },
00:21:43.570 {
00:21:43.570 "name": "pt2",
00:21:43.570 "uuid": "00000000-0000-0000-0000-000000000002",
00:21:43.570 "is_configured": true,
00:21:43.570 "data_offset": 2048,
00:21:43.570 "data_size": 63488
00:21:43.570 },
00:21:43.570 {
00:21:43.570 "name": null,
00:21:43.570 "uuid": "00000000-0000-0000-0000-000000000003",
00:21:43.570 "is_configured": false,
00:21:43.570 "data_offset": 2048,
00:21:43.570 "data_size": 63488
00:21:43.570 },
00:21:43.570 {
00:21:43.570 "name": null,
00:21:43.570 "uuid": "00000000-0000-0000-0000-000000000004",
00:21:43.570 "is_configured": false,
00:21:43.570 "data_offset": 2048,
00:21:43.570 "data_size": 63488
00:21:43.570 }
00:21:43.570 ]
00:21:43.570 }'
00:21:43.570 13:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:21:43.570 13:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:43.829 13:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
00:21:43.829 13:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:21:43.829 13:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:21:43.829 13:41:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:43.829 13:41:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:44.087 [2024-11-20 13:41:43.315304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:21:44.087 [2024-11-20 13:41:43.315588] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:44.087 [2024-11-20 13:41:43.315624] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80
00:21:44.087 [2024-11-20 13:41:43.315636] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:44.087 [2024-11-20 13:41:43.316090] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:44.087 [2024-11-20 13:41:43.316117] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:21:44.087 [2024-11-20 13:41:43.316214] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:21:44.087 [2024-11-20 13:41:43.316236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:21:44.087 pt3
00:21:44.087 13:41:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:44.087 13:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:21:44.087 13:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:21:44.087 13:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:21:44.087 13:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:21:44.087 13:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:21:44.087 13:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:21:44.087 13:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:21:44.087 13:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:21:44.087 13:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:21:44.087 13:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:44.088 13:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.088 13:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:44.088 13:41:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.088 13:41:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.088 13:41:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.088 13:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:44.088 "name": "raid_bdev1", 00:21:44.088 "uuid": "659e6e06-650a-4e58-8838-412377df946f", 00:21:44.088 "strip_size_kb": 64, 00:21:44.088 "state": "configuring", 00:21:44.088 "raid_level": "raid5f", 00:21:44.088 "superblock": true, 00:21:44.088 "num_base_bdevs": 4, 00:21:44.088 "num_base_bdevs_discovered": 2, 00:21:44.088 "num_base_bdevs_operational": 3, 00:21:44.088 "base_bdevs_list": [ 00:21:44.088 { 00:21:44.088 "name": null, 00:21:44.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.088 "is_configured": false, 00:21:44.088 "data_offset": 2048, 00:21:44.088 "data_size": 63488 00:21:44.088 }, 00:21:44.088 { 00:21:44.088 "name": "pt2", 00:21:44.088 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:44.088 "is_configured": true, 00:21:44.088 "data_offset": 2048, 00:21:44.088 "data_size": 63488 00:21:44.088 }, 00:21:44.088 { 00:21:44.088 "name": "pt3", 00:21:44.088 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:44.088 "is_configured": true, 00:21:44.088 "data_offset": 2048, 00:21:44.088 "data_size": 63488 00:21:44.088 }, 00:21:44.088 { 00:21:44.088 "name": null, 00:21:44.088 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:44.088 "is_configured": false, 00:21:44.088 "data_offset": 2048, 
00:21:44.088 "data_size": 63488 00:21:44.088 } 00:21:44.088 ] 00:21:44.088 }' 00:21:44.088 13:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:44.088 13:41:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.346 13:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:21:44.346 13:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:44.346 13:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:21:44.346 13:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:44.346 13:41:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.346 13:41:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.346 [2024-11-20 13:41:43.783029] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:44.346 [2024-11-20 13:41:43.783119] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:44.346 [2024-11-20 13:41:43.783145] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:21:44.346 [2024-11-20 13:41:43.783157] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:44.346 [2024-11-20 13:41:43.783607] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:44.346 [2024-11-20 13:41:43.783630] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:44.346 [2024-11-20 13:41:43.783714] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:21:44.346 [2024-11-20 13:41:43.783742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:44.346 [2024-11-20 13:41:43.783863] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:44.346 [2024-11-20 13:41:43.783873] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:44.346 [2024-11-20 13:41:43.784136] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:21:44.346 [2024-11-20 13:41:43.791144] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:44.346 [2024-11-20 13:41:43.791176] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:21:44.346 [2024-11-20 13:41:43.791520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:44.346 pt4 00:21:44.346 13:41:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.346 13:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:44.346 13:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:44.346 13:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:44.346 13:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:44.346 13:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:44.346 13:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:44.346 13:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:44.346 13:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:44.346 13:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:44.346 13:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:44.346 
13:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.346 13:41:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.346 13:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:44.346 13:41:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.346 13:41:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.603 13:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:44.603 "name": "raid_bdev1", 00:21:44.603 "uuid": "659e6e06-650a-4e58-8838-412377df946f", 00:21:44.603 "strip_size_kb": 64, 00:21:44.603 "state": "online", 00:21:44.603 "raid_level": "raid5f", 00:21:44.603 "superblock": true, 00:21:44.603 "num_base_bdevs": 4, 00:21:44.603 "num_base_bdevs_discovered": 3, 00:21:44.603 "num_base_bdevs_operational": 3, 00:21:44.603 "base_bdevs_list": [ 00:21:44.603 { 00:21:44.603 "name": null, 00:21:44.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.603 "is_configured": false, 00:21:44.603 "data_offset": 2048, 00:21:44.603 "data_size": 63488 00:21:44.603 }, 00:21:44.603 { 00:21:44.603 "name": "pt2", 00:21:44.603 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:44.603 "is_configured": true, 00:21:44.603 "data_offset": 2048, 00:21:44.603 "data_size": 63488 00:21:44.603 }, 00:21:44.603 { 00:21:44.603 "name": "pt3", 00:21:44.603 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:44.603 "is_configured": true, 00:21:44.603 "data_offset": 2048, 00:21:44.603 "data_size": 63488 00:21:44.603 }, 00:21:44.603 { 00:21:44.603 "name": "pt4", 00:21:44.603 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:44.603 "is_configured": true, 00:21:44.603 "data_offset": 2048, 00:21:44.603 "data_size": 63488 00:21:44.603 } 00:21:44.603 ] 00:21:44.603 }' 00:21:44.603 13:41:43 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:44.603 13:41:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.861 13:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:44.861 13:41:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.861 13:41:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.861 [2024-11-20 13:41:44.196047] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:44.861 [2024-11-20 13:41:44.196088] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:44.861 [2024-11-20 13:41:44.196182] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:44.861 [2024-11-20 13:41:44.196255] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:44.861 [2024-11-20 13:41:44.196270] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:44.861 13:41:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.861 13:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.861 13:41:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.861 13:41:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.861 13:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:21:44.861 13:41:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.861 13:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:21:44.861 13:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:21:44.861 13:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:21:44.861 13:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:21:44.861 13:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:21:44.861 13:41:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.861 13:41:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.861 13:41:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.861 13:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:44.861 13:41:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.861 13:41:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.861 [2024-11-20 13:41:44.275946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:44.861 [2024-11-20 13:41:44.276047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:44.861 [2024-11-20 13:41:44.276098] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:21:44.861 [2024-11-20 13:41:44.276120] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:44.861 [2024-11-20 13:41:44.279074] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:44.861 [2024-11-20 13:41:44.279123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:44.861 [2024-11-20 13:41:44.279228] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:44.861 [2024-11-20 13:41:44.279305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:44.861 
[2024-11-20 13:41:44.279458] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:44.861 [2024-11-20 13:41:44.279475] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:44.861 [2024-11-20 13:41:44.279492] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:21:44.861 [2024-11-20 13:41:44.279557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:44.861 [2024-11-20 13:41:44.279662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:44.861 pt1 00:21:44.861 13:41:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.861 13:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:21:44.861 13:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:21:44.861 13:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:44.861 13:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:44.861 13:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:44.861 13:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:44.861 13:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:44.861 13:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:44.861 13:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:44.861 13:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:44.861 13:41:44 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:21:44.861 13:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.861 13:41:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.861 13:41:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.861 13:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:44.861 13:41:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.861 13:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:44.861 "name": "raid_bdev1", 00:21:44.861 "uuid": "659e6e06-650a-4e58-8838-412377df946f", 00:21:44.861 "strip_size_kb": 64, 00:21:44.861 "state": "configuring", 00:21:44.861 "raid_level": "raid5f", 00:21:44.861 "superblock": true, 00:21:44.861 "num_base_bdevs": 4, 00:21:44.861 "num_base_bdevs_discovered": 2, 00:21:44.861 "num_base_bdevs_operational": 3, 00:21:44.861 "base_bdevs_list": [ 00:21:44.861 { 00:21:44.861 "name": null, 00:21:44.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.861 "is_configured": false, 00:21:44.861 "data_offset": 2048, 00:21:44.861 "data_size": 63488 00:21:44.861 }, 00:21:44.861 { 00:21:44.861 "name": "pt2", 00:21:44.861 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:44.861 "is_configured": true, 00:21:44.861 "data_offset": 2048, 00:21:44.861 "data_size": 63488 00:21:44.861 }, 00:21:44.861 { 00:21:44.861 "name": "pt3", 00:21:44.861 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:44.861 "is_configured": true, 00:21:44.861 "data_offset": 2048, 00:21:44.861 "data_size": 63488 00:21:44.861 }, 00:21:44.861 { 00:21:44.861 "name": null, 00:21:44.862 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:44.862 "is_configured": false, 00:21:44.862 "data_offset": 2048, 00:21:44.862 "data_size": 63488 00:21:44.862 } 00:21:44.862 ] 
00:21:44.862 }' 00:21:45.119 13:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:45.119 13:41:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.377 13:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:21:45.377 13:41:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.377 13:41:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.377 13:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:45.377 13:41:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.377 13:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:21:45.377 13:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:45.377 13:41:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.377 13:41:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.378 [2024-11-20 13:41:44.787278] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:45.378 [2024-11-20 13:41:44.787356] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:45.378 [2024-11-20 13:41:44.787385] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:21:45.378 [2024-11-20 13:41:44.787399] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:45.378 [2024-11-20 13:41:44.787911] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:45.378 [2024-11-20 13:41:44.787939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:21:45.378 [2024-11-20 13:41:44.788037] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:21:45.378 [2024-11-20 13:41:44.788080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:45.378 [2024-11-20 13:41:44.788226] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:21:45.378 [2024-11-20 13:41:44.788295] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:45.378 [2024-11-20 13:41:44.788609] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:21:45.378 [2024-11-20 13:41:44.797025] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:21:45.378 [2024-11-20 13:41:44.797058] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:21:45.378 [2024-11-20 13:41:44.797423] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:45.378 pt4 00:21:45.378 13:41:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.378 13:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:45.378 13:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:45.378 13:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:45.378 13:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:45.378 13:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:45.378 13:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:45.378 13:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:45.378 13:41:44 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:45.378 13:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:45.378 13:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:45.378 13:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.378 13:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:45.378 13:41:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.378 13:41:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.378 13:41:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.378 13:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:45.378 "name": "raid_bdev1", 00:21:45.378 "uuid": "659e6e06-650a-4e58-8838-412377df946f", 00:21:45.378 "strip_size_kb": 64, 00:21:45.378 "state": "online", 00:21:45.378 "raid_level": "raid5f", 00:21:45.378 "superblock": true, 00:21:45.378 "num_base_bdevs": 4, 00:21:45.378 "num_base_bdevs_discovered": 3, 00:21:45.378 "num_base_bdevs_operational": 3, 00:21:45.378 "base_bdevs_list": [ 00:21:45.378 { 00:21:45.378 "name": null, 00:21:45.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.378 "is_configured": false, 00:21:45.378 "data_offset": 2048, 00:21:45.378 "data_size": 63488 00:21:45.378 }, 00:21:45.378 { 00:21:45.378 "name": "pt2", 00:21:45.378 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:45.378 "is_configured": true, 00:21:45.378 "data_offset": 2048, 00:21:45.378 "data_size": 63488 00:21:45.378 }, 00:21:45.378 { 00:21:45.378 "name": "pt3", 00:21:45.378 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:45.378 "is_configured": true, 00:21:45.378 "data_offset": 2048, 00:21:45.378 "data_size": 63488 
00:21:45.378 }, 00:21:45.378 { 00:21:45.378 "name": "pt4", 00:21:45.378 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:45.378 "is_configured": true, 00:21:45.378 "data_offset": 2048, 00:21:45.378 "data_size": 63488 00:21:45.378 } 00:21:45.378 ] 00:21:45.378 }' 00:21:45.378 13:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:45.378 13:41:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.944 13:41:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:21:45.944 13:41:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:45.944 13:41:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.944 13:41:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.944 13:41:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.944 13:41:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:21:45.944 13:41:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:45.944 13:41:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.944 13:41:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.944 13:41:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:21:45.944 [2024-11-20 13:41:45.263055] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:45.944 13:41:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.944 13:41:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 659e6e06-650a-4e58-8838-412377df946f '!=' 659e6e06-650a-4e58-8838-412377df946f ']' 00:21:45.944 13:41:45 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83915 00:21:45.944 13:41:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 83915 ']' 00:21:45.944 13:41:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 83915 00:21:45.944 13:41:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:21:45.944 13:41:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:45.944 13:41:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83915 00:21:45.944 killing process with pid 83915 00:21:45.944 13:41:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:45.944 13:41:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:45.944 13:41:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83915' 00:21:45.944 13:41:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 83915 00:21:45.944 [2024-11-20 13:41:45.358819] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:45.944 [2024-11-20 13:41:45.358930] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:45.944 13:41:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 83915 00:21:45.944 [2024-11-20 13:41:45.359014] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:45.944 [2024-11-20 13:41:45.359031] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:21:46.508 [2024-11-20 13:41:45.763878] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:47.442 13:41:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:21:47.442 
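Note: the `bdev_raid.sh@511` loop traced above creates a passthru bdev (`pt2`, `pt3`) over each middle base device, leaving the last one (`pt4`, at `@519`/`@520`) to be added separately. A dependency-free sketch of that control flow, with a hypothetical stub `rpc_cmd` standing in for the real SPDK JSON-RPC client (which talks to the target over `/var/tmp/spdk.sock`):

```shell
#!/usr/bin/env bash
# Sketch of the bdev_raid.sh@511-512 loop from the trace. rpc_cmd here is a
# stub that only records the passthru name; the real helper issues a JSON-RPC
# bdev_passthru_create request to the running SPDK target.
num_base_bdevs=4
created=()

rpc_cmd() { created+=("$5"); }   # $5 is the passthru name following -p

# i starts at 1 (the first device was consumed earlier in the test) and stops
# before the last device, which bdev_raid.sh@519-520 creates on its own.
for (( i = 1; i < num_base_bdevs - 1; i++ )); do
  n=$(( i + 1 ))
  rpc_cmd bdev_passthru_create -b "malloc$n" -p "pt$n" \
    -u "00000000-0000-0000-0000-00000000000$n"
done

echo "${created[@]}"   # pt2 pt3
```

With `num_base_bdevs=4` this issues exactly the two `bdev_passthru_create` calls (`pt2`, `pt3`) seen in the trace before the separate `pt4` step.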
00:21:47.442 real 0m8.718s 00:21:47.442 user 0m13.605s 00:21:47.442 sys 0m1.874s 00:21:47.442 13:41:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:47.442 13:41:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.442 ************************************ 00:21:47.442 END TEST raid5f_superblock_test 00:21:47.442 ************************************ 00:21:47.700 13:41:46 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:21:47.700 13:41:46 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:21:47.700 13:41:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:21:47.700 13:41:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:47.700 13:41:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:47.700 ************************************ 00:21:47.700 START TEST raid5f_rebuild_test 00:21:47.700 ************************************ 00:21:47.700 13:41:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:21:47.700 13:41:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:21:47.700 13:41:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:21:47.700 13:41:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:21:47.700 13:41:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:21:47.700 13:41:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:21:47.700 13:41:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:47.700 13:41:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:47.700 13:41:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:21:47.700 13:41:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:47.700 13:41:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:47.700 13:41:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:47.701 13:41:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:47.701 13:41:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:47.701 13:41:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:21:47.701 13:41:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:47.701 13:41:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:47.701 13:41:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:21:47.701 13:41:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:47.701 13:41:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:47.701 13:41:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:47.701 13:41:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:47.701 13:41:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:47.701 13:41:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:47.701 13:41:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:47.701 13:41:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:47.701 13:41:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:47.701 13:41:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:21:47.701 13:41:47 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:21:47.701 13:41:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:21:47.701 13:41:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:21:47.701 13:41:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:21:47.701 13:41:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84405 00:21:47.701 13:41:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:47.701 13:41:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84405 00:21:47.701 13:41:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84405 ']' 00:21:47.701 13:41:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.701 13:41:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:47.701 13:41:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.701 13:41:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:47.701 13:41:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.701 [2024-11-20 13:41:47.113808] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:21:47.701 [2024-11-20 13:41:47.114146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:21:47.701 Zero copy mechanism will not be used. 
00:21:47.701 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84405 ] 00:21:47.958 [2024-11-20 13:41:47.294098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.958 [2024-11-20 13:41:47.405715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.216 [2024-11-20 13:41:47.629707] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:48.216 [2024-11-20 13:41:47.630015] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:48.473 13:41:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:48.473 13:41:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:21:48.473 13:41:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:48.473 13:41:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:48.473 13:41:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.473 13:41:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.732 BaseBdev1_malloc 00:21:48.732 13:41:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.732 13:41:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:48.732 13:41:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.732 13:41:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.732 [2024-11-20 13:41:47.994414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:48.732 [2024-11-20 13:41:47.994626] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:21:48.732 [2024-11-20 13:41:47.994661] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:48.732 [2024-11-20 13:41:47.994677] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:48.732 [2024-11-20 13:41:47.997144] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:48.732 [2024-11-20 13:41:47.997183] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:48.732 BaseBdev1 00:21:48.732 13:41:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.732 13:41:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:48.732 13:41:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:48.732 13:41:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.732 13:41:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.732 BaseBdev2_malloc 00:21:48.732 13:41:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.732 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:48.732 13:41:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.732 13:41:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.732 [2024-11-20 13:41:48.043341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:48.732 [2024-11-20 13:41:48.043418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:48.732 [2024-11-20 13:41:48.043448] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:48.732 [2024-11-20 13:41:48.043463] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:48.732 [2024-11-20 13:41:48.045863] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:48.732 [2024-11-20 13:41:48.046055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:48.732 BaseBdev2 00:21:48.732 13:41:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.732 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:48.732 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:48.732 13:41:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.732 13:41:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.732 BaseBdev3_malloc 00:21:48.732 13:41:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.732 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:48.732 13:41:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.732 13:41:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.732 [2024-11-20 13:41:48.114617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:48.732 [2024-11-20 13:41:48.114679] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:48.733 [2024-11-20 13:41:48.114703] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:48.733 [2024-11-20 13:41:48.114717] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:48.733 [2024-11-20 13:41:48.117016] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:48.733 [2024-11-20 
13:41:48.117070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:48.733 BaseBdev3 00:21:48.733 13:41:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.733 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:48.733 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:48.733 13:41:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.733 13:41:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.733 BaseBdev4_malloc 00:21:48.733 13:41:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.733 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:48.733 13:41:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.733 13:41:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.733 [2024-11-20 13:41:48.173400] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:48.733 [2024-11-20 13:41:48.173467] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:48.733 [2024-11-20 13:41:48.173490] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:48.733 [2024-11-20 13:41:48.173505] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:48.733 [2024-11-20 13:41:48.175827] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:48.733 [2024-11-20 13:41:48.175873] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:48.733 BaseBdev4 00:21:48.733 13:41:48 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.733 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:21:48.733 13:41:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.733 13:41:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.991 spare_malloc 00:21:48.991 13:41:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.991 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:48.991 13:41:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.991 13:41:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.991 spare_delay 00:21:48.991 13:41:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.991 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:48.991 13:41:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.991 13:41:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.991 [2024-11-20 13:41:48.244437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:48.991 [2024-11-20 13:41:48.244504] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:48.991 [2024-11-20 13:41:48.244527] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:48.991 [2024-11-20 13:41:48.244541] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:48.991 [2024-11-20 13:41:48.246925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:48.991 [2024-11-20 13:41:48.246971] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:48.991 spare 00:21:48.991 13:41:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.991 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:21:48.991 13:41:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.991 13:41:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.991 [2024-11-20 13:41:48.256511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:48.991 [2024-11-20 13:41:48.258664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:48.991 [2024-11-20 13:41:48.258893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:48.991 [2024-11-20 13:41:48.258960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:48.991 [2024-11-20 13:41:48.259085] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:48.991 [2024-11-20 13:41:48.259102] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:21:48.991 [2024-11-20 13:41:48.259425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:48.991 [2024-11-20 13:41:48.267337] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:48.991 [2024-11-20 13:41:48.267490] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:48.991 [2024-11-20 13:41:48.267897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:48.991 13:41:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.991 13:41:48 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:21:48.991 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:48.991 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:48.991 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:48.991 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:48.991 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:48.992 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:48.992 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:48.992 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:48.992 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:48.992 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.992 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:48.992 13:41:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.992 13:41:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.992 13:41:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.992 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:48.992 "name": "raid_bdev1", 00:21:48.992 "uuid": "cbe2cc01-36df-4784-861d-6e351a824a8b", 00:21:48.992 "strip_size_kb": 64, 00:21:48.992 "state": "online", 00:21:48.992 "raid_level": "raid5f", 00:21:48.992 "superblock": false, 00:21:48.992 "num_base_bdevs": 4, 00:21:48.992 
"num_base_bdevs_discovered": 4, 00:21:48.992 "num_base_bdevs_operational": 4, 00:21:48.992 "base_bdevs_list": [ 00:21:48.992 { 00:21:48.992 "name": "BaseBdev1", 00:21:48.992 "uuid": "43e8306e-b41e-5326-883a-8f7df87684bc", 00:21:48.992 "is_configured": true, 00:21:48.992 "data_offset": 0, 00:21:48.992 "data_size": 65536 00:21:48.992 }, 00:21:48.992 { 00:21:48.992 "name": "BaseBdev2", 00:21:48.992 "uuid": "18285108-c840-5d21-85d4-c7d64dfc2b8d", 00:21:48.992 "is_configured": true, 00:21:48.992 "data_offset": 0, 00:21:48.992 "data_size": 65536 00:21:48.992 }, 00:21:48.992 { 00:21:48.992 "name": "BaseBdev3", 00:21:48.992 "uuid": "505e1c40-f3f4-5f51-95a2-97fe9783129c", 00:21:48.992 "is_configured": true, 00:21:48.992 "data_offset": 0, 00:21:48.992 "data_size": 65536 00:21:48.992 }, 00:21:48.992 { 00:21:48.992 "name": "BaseBdev4", 00:21:48.992 "uuid": "852bc8a2-4acc-5ded-9b37-6b01b114bc07", 00:21:48.992 "is_configured": true, 00:21:48.992 "data_offset": 0, 00:21:48.992 "data_size": 65536 00:21:48.992 } 00:21:48.992 ] 00:21:48.992 }' 00:21:48.992 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:48.992 13:41:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.251 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:49.251 13:41:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.251 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:49.251 13:41:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.251 [2024-11-20 13:41:48.668628] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:49.251 13:41:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.251 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 
00:21:49.251 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:49.251 13:41:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.251 13:41:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.251 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:49.251 13:41:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.510 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:21:49.510 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:21:49.510 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:21:49.510 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:21:49.510 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:21:49.510 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:49.510 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:49.510 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:49.510 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:49.510 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:49.510 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:21:49.510 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:49.510 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:49.510 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:49.510 [2024-11-20 13:41:48.944261] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:21:49.510 /dev/nbd0 00:21:49.510 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:49.770 13:41:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:49.770 13:41:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:49.770 13:41:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:21:49.770 13:41:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:49.770 13:41:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:49.770 13:41:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:49.770 13:41:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:21:49.770 13:41:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:49.770 13:41:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:49.770 13:41:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:49.770 1+0 records in 00:21:49.770 1+0 records out 00:21:49.770 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285402 s, 14.4 MB/s 00:21:49.770 13:41:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:49.770 13:41:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:21:49.770 13:41:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:21:49.770 13:41:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:49.770 13:41:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:21:49.770 13:41:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:49.770 13:41:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:49.770 13:41:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:21:49.770 13:41:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:21:49.770 13:41:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:21:49.770 13:41:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:21:50.338 512+0 records in 00:21:50.338 512+0 records out 00:21:50.338 100663296 bytes (101 MB, 96 MiB) copied, 0.493367 s, 204 MB/s 00:21:50.338 13:41:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:50.338 13:41:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:50.338 13:41:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:50.338 13:41:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:50.338 13:41:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:21:50.338 13:41:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:50.338 13:41:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:50.338 13:41:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:50.338 [2024-11-20 13:41:49.753517] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:21:50.338 13:41:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:50.338 13:41:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:50.338 13:41:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:50.338 13:41:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:50.338 13:41:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:50.338 13:41:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:50.338 13:41:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:50.338 13:41:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:50.338 13:41:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.338 13:41:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.338 [2024-11-20 13:41:49.766398] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:50.338 13:41:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.338 13:41:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:50.338 13:41:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:50.338 13:41:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:50.338 13:41:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:50.338 13:41:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:50.338 13:41:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:50.338 13:41:49 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:50.338 13:41:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:50.339 13:41:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:50.339 13:41:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:50.339 13:41:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:50.339 13:41:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.339 13:41:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.339 13:41:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.339 13:41:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.339 13:41:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:50.339 "name": "raid_bdev1", 00:21:50.339 "uuid": "cbe2cc01-36df-4784-861d-6e351a824a8b", 00:21:50.339 "strip_size_kb": 64, 00:21:50.339 "state": "online", 00:21:50.339 "raid_level": "raid5f", 00:21:50.339 "superblock": false, 00:21:50.339 "num_base_bdevs": 4, 00:21:50.339 "num_base_bdevs_discovered": 3, 00:21:50.339 "num_base_bdevs_operational": 3, 00:21:50.339 "base_bdevs_list": [ 00:21:50.339 { 00:21:50.339 "name": null, 00:21:50.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.339 "is_configured": false, 00:21:50.339 "data_offset": 0, 00:21:50.339 "data_size": 65536 00:21:50.339 }, 00:21:50.339 { 00:21:50.339 "name": "BaseBdev2", 00:21:50.339 "uuid": "18285108-c840-5d21-85d4-c7d64dfc2b8d", 00:21:50.339 "is_configured": true, 00:21:50.339 "data_offset": 0, 00:21:50.339 "data_size": 65536 00:21:50.339 }, 00:21:50.339 { 00:21:50.339 "name": "BaseBdev3", 00:21:50.339 "uuid": "505e1c40-f3f4-5f51-95a2-97fe9783129c", 00:21:50.339 "is_configured": true, 00:21:50.339 
"data_offset": 0, 00:21:50.339 "data_size": 65536 00:21:50.339 }, 00:21:50.339 { 00:21:50.339 "name": "BaseBdev4", 00:21:50.339 "uuid": "852bc8a2-4acc-5ded-9b37-6b01b114bc07", 00:21:50.339 "is_configured": true, 00:21:50.339 "data_offset": 0, 00:21:50.339 "data_size": 65536 00:21:50.339 } 00:21:50.339 ] 00:21:50.339 }' 00:21:50.339 13:41:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:50.339 13:41:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.906 13:41:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:50.906 13:41:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.906 13:41:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.906 [2024-11-20 13:41:50.134034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:50.906 [2024-11-20 13:41:50.150036] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:21:50.906 13:41:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.906 13:41:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:50.906 [2024-11-20 13:41:50.159928] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:51.844 13:41:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:51.844 13:41:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:51.844 13:41:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:51.844 13:41:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:51.844 13:41:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:51.844 
13:41:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.844 13:41:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:51.844 13:41:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.844 13:41:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.844 13:41:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.844 13:41:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:51.844 "name": "raid_bdev1", 00:21:51.844 "uuid": "cbe2cc01-36df-4784-861d-6e351a824a8b", 00:21:51.844 "strip_size_kb": 64, 00:21:51.844 "state": "online", 00:21:51.844 "raid_level": "raid5f", 00:21:51.844 "superblock": false, 00:21:51.844 "num_base_bdevs": 4, 00:21:51.844 "num_base_bdevs_discovered": 4, 00:21:51.844 "num_base_bdevs_operational": 4, 00:21:51.844 "process": { 00:21:51.844 "type": "rebuild", 00:21:51.844 "target": "spare", 00:21:51.844 "progress": { 00:21:51.844 "blocks": 19200, 00:21:51.844 "percent": 9 00:21:51.844 } 00:21:51.844 }, 00:21:51.844 "base_bdevs_list": [ 00:21:51.844 { 00:21:51.844 "name": "spare", 00:21:51.844 "uuid": "88ae6e3b-87d4-5c42-92b1-1814896cfac0", 00:21:51.844 "is_configured": true, 00:21:51.844 "data_offset": 0, 00:21:51.844 "data_size": 65536 00:21:51.844 }, 00:21:51.844 { 00:21:51.844 "name": "BaseBdev2", 00:21:51.844 "uuid": "18285108-c840-5d21-85d4-c7d64dfc2b8d", 00:21:51.844 "is_configured": true, 00:21:51.844 "data_offset": 0, 00:21:51.844 "data_size": 65536 00:21:51.844 }, 00:21:51.844 { 00:21:51.844 "name": "BaseBdev3", 00:21:51.844 "uuid": "505e1c40-f3f4-5f51-95a2-97fe9783129c", 00:21:51.844 "is_configured": true, 00:21:51.844 "data_offset": 0, 00:21:51.844 "data_size": 65536 00:21:51.844 }, 00:21:51.844 { 00:21:51.844 "name": "BaseBdev4", 00:21:51.844 "uuid": 
"852bc8a2-4acc-5ded-9b37-6b01b114bc07", 00:21:51.844 "is_configured": true, 00:21:51.844 "data_offset": 0, 00:21:51.844 "data_size": 65536 00:21:51.844 } 00:21:51.844 ] 00:21:51.844 }' 00:21:51.844 13:41:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:51.844 13:41:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:51.844 13:41:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:51.844 13:41:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:51.845 13:41:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:51.845 13:41:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.845 13:41:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.845 [2024-11-20 13:41:51.303479] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:52.104 [2024-11-20 13:41:51.368523] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:52.104 [2024-11-20 13:41:51.368619] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:52.104 [2024-11-20 13:41:51.368638] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:52.104 [2024-11-20 13:41:51.368650] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:52.104 13:41:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.104 13:41:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:52.104 13:41:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:52.104 13:41:51 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:52.104 13:41:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:52.104 13:41:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:52.104 13:41:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:52.104 13:41:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:52.104 13:41:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:52.104 13:41:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:52.104 13:41:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:52.104 13:41:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:52.104 13:41:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.104 13:41:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.104 13:41:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.104 13:41:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.104 13:41:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:52.104 "name": "raid_bdev1", 00:21:52.104 "uuid": "cbe2cc01-36df-4784-861d-6e351a824a8b", 00:21:52.104 "strip_size_kb": 64, 00:21:52.104 "state": "online", 00:21:52.104 "raid_level": "raid5f", 00:21:52.104 "superblock": false, 00:21:52.104 "num_base_bdevs": 4, 00:21:52.104 "num_base_bdevs_discovered": 3, 00:21:52.104 "num_base_bdevs_operational": 3, 00:21:52.104 "base_bdevs_list": [ 00:21:52.104 { 00:21:52.104 "name": null, 00:21:52.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:52.104 "is_configured": false, 00:21:52.104 "data_offset": 0, 
00:21:52.104 "data_size": 65536 00:21:52.104 }, 00:21:52.104 { 00:21:52.104 "name": "BaseBdev2", 00:21:52.104 "uuid": "18285108-c840-5d21-85d4-c7d64dfc2b8d", 00:21:52.104 "is_configured": true, 00:21:52.104 "data_offset": 0, 00:21:52.104 "data_size": 65536 00:21:52.104 }, 00:21:52.104 { 00:21:52.104 "name": "BaseBdev3", 00:21:52.104 "uuid": "505e1c40-f3f4-5f51-95a2-97fe9783129c", 00:21:52.104 "is_configured": true, 00:21:52.104 "data_offset": 0, 00:21:52.104 "data_size": 65536 00:21:52.104 }, 00:21:52.104 { 00:21:52.104 "name": "BaseBdev4", 00:21:52.104 "uuid": "852bc8a2-4acc-5ded-9b37-6b01b114bc07", 00:21:52.104 "is_configured": true, 00:21:52.104 "data_offset": 0, 00:21:52.104 "data_size": 65536 00:21:52.104 } 00:21:52.104 ] 00:21:52.104 }' 00:21:52.104 13:41:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:52.104 13:41:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.364 13:41:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:52.364 13:41:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:52.364 13:41:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:52.364 13:41:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:52.364 13:41:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:52.364 13:41:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.364 13:41:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:52.364 13:41:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.364 13:41:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.364 13:41:51 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.624 13:41:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:52.624 "name": "raid_bdev1", 00:21:52.624 "uuid": "cbe2cc01-36df-4784-861d-6e351a824a8b", 00:21:52.624 "strip_size_kb": 64, 00:21:52.624 "state": "online", 00:21:52.624 "raid_level": "raid5f", 00:21:52.624 "superblock": false, 00:21:52.624 "num_base_bdevs": 4, 00:21:52.624 "num_base_bdevs_discovered": 3, 00:21:52.624 "num_base_bdevs_operational": 3, 00:21:52.624 "base_bdevs_list": [ 00:21:52.624 { 00:21:52.624 "name": null, 00:21:52.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:52.624 "is_configured": false, 00:21:52.624 "data_offset": 0, 00:21:52.624 "data_size": 65536 00:21:52.624 }, 00:21:52.624 { 00:21:52.624 "name": "BaseBdev2", 00:21:52.624 "uuid": "18285108-c840-5d21-85d4-c7d64dfc2b8d", 00:21:52.624 "is_configured": true, 00:21:52.624 "data_offset": 0, 00:21:52.624 "data_size": 65536 00:21:52.624 }, 00:21:52.624 { 00:21:52.624 "name": "BaseBdev3", 00:21:52.624 "uuid": "505e1c40-f3f4-5f51-95a2-97fe9783129c", 00:21:52.624 "is_configured": true, 00:21:52.624 "data_offset": 0, 00:21:52.624 "data_size": 65536 00:21:52.624 }, 00:21:52.624 { 00:21:52.624 "name": "BaseBdev4", 00:21:52.624 "uuid": "852bc8a2-4acc-5ded-9b37-6b01b114bc07", 00:21:52.624 "is_configured": true, 00:21:52.624 "data_offset": 0, 00:21:52.624 "data_size": 65536 00:21:52.624 } 00:21:52.624 ] 00:21:52.624 }' 00:21:52.624 13:41:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:52.624 13:41:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:52.624 13:41:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:52.624 13:41:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:52.624 13:41:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd 
bdev_raid_add_base_bdev raid_bdev1 spare 00:21:52.624 13:41:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.624 13:41:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.624 [2024-11-20 13:41:51.943856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:52.624 [2024-11-20 13:41:51.959227] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:21:52.624 13:41:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.624 13:41:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:52.624 [2024-11-20 13:41:51.968707] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:53.562 13:41:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:53.562 13:41:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:53.562 13:41:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:53.562 13:41:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:53.562 13:41:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:53.562 13:41:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:53.562 13:41:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.562 13:41:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.562 13:41:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.562 13:41:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.562 13:41:53 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:53.562 "name": "raid_bdev1", 00:21:53.562 "uuid": "cbe2cc01-36df-4784-861d-6e351a824a8b", 00:21:53.562 "strip_size_kb": 64, 00:21:53.562 "state": "online", 00:21:53.562 "raid_level": "raid5f", 00:21:53.562 "superblock": false, 00:21:53.562 "num_base_bdevs": 4, 00:21:53.562 "num_base_bdevs_discovered": 4, 00:21:53.562 "num_base_bdevs_operational": 4, 00:21:53.562 "process": { 00:21:53.562 "type": "rebuild", 00:21:53.562 "target": "spare", 00:21:53.562 "progress": { 00:21:53.562 "blocks": 19200, 00:21:53.562 "percent": 9 00:21:53.562 } 00:21:53.562 }, 00:21:53.562 "base_bdevs_list": [ 00:21:53.562 { 00:21:53.562 "name": "spare", 00:21:53.562 "uuid": "88ae6e3b-87d4-5c42-92b1-1814896cfac0", 00:21:53.562 "is_configured": true, 00:21:53.562 "data_offset": 0, 00:21:53.562 "data_size": 65536 00:21:53.562 }, 00:21:53.562 { 00:21:53.562 "name": "BaseBdev2", 00:21:53.562 "uuid": "18285108-c840-5d21-85d4-c7d64dfc2b8d", 00:21:53.562 "is_configured": true, 00:21:53.562 "data_offset": 0, 00:21:53.562 "data_size": 65536 00:21:53.562 }, 00:21:53.562 { 00:21:53.562 "name": "BaseBdev3", 00:21:53.562 "uuid": "505e1c40-f3f4-5f51-95a2-97fe9783129c", 00:21:53.562 "is_configured": true, 00:21:53.562 "data_offset": 0, 00:21:53.562 "data_size": 65536 00:21:53.562 }, 00:21:53.562 { 00:21:53.562 "name": "BaseBdev4", 00:21:53.562 "uuid": "852bc8a2-4acc-5ded-9b37-6b01b114bc07", 00:21:53.562 "is_configured": true, 00:21:53.562 "data_offset": 0, 00:21:53.562 "data_size": 65536 00:21:53.562 } 00:21:53.562 ] 00:21:53.562 }' 00:21:53.562 13:41:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:53.824 13:41:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:53.824 13:41:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:53.824 13:41:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:21:53.824 13:41:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:21:53.824 13:41:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:21:53.824 13:41:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:21:53.824 13:41:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=621 00:21:53.824 13:41:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:53.824 13:41:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:53.824 13:41:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:53.824 13:41:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:53.824 13:41:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:53.824 13:41:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:53.824 13:41:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:53.824 13:41:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.824 13:41:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.824 13:41:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.824 13:41:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.824 13:41:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:53.824 "name": "raid_bdev1", 00:21:53.824 "uuid": "cbe2cc01-36df-4784-861d-6e351a824a8b", 00:21:53.824 "strip_size_kb": 64, 00:21:53.824 "state": "online", 00:21:53.824 "raid_level": "raid5f", 00:21:53.824 "superblock": false, 
00:21:53.824 "num_base_bdevs": 4, 00:21:53.824 "num_base_bdevs_discovered": 4, 00:21:53.824 "num_base_bdevs_operational": 4, 00:21:53.824 "process": { 00:21:53.824 "type": "rebuild", 00:21:53.824 "target": "spare", 00:21:53.824 "progress": { 00:21:53.824 "blocks": 21120, 00:21:53.824 "percent": 10 00:21:53.824 } 00:21:53.824 }, 00:21:53.824 "base_bdevs_list": [ 00:21:53.824 { 00:21:53.824 "name": "spare", 00:21:53.824 "uuid": "88ae6e3b-87d4-5c42-92b1-1814896cfac0", 00:21:53.824 "is_configured": true, 00:21:53.824 "data_offset": 0, 00:21:53.824 "data_size": 65536 00:21:53.824 }, 00:21:53.824 { 00:21:53.824 "name": "BaseBdev2", 00:21:53.824 "uuid": "18285108-c840-5d21-85d4-c7d64dfc2b8d", 00:21:53.824 "is_configured": true, 00:21:53.824 "data_offset": 0, 00:21:53.824 "data_size": 65536 00:21:53.824 }, 00:21:53.824 { 00:21:53.824 "name": "BaseBdev3", 00:21:53.824 "uuid": "505e1c40-f3f4-5f51-95a2-97fe9783129c", 00:21:53.824 "is_configured": true, 00:21:53.824 "data_offset": 0, 00:21:53.824 "data_size": 65536 00:21:53.824 }, 00:21:53.824 { 00:21:53.824 "name": "BaseBdev4", 00:21:53.824 "uuid": "852bc8a2-4acc-5ded-9b37-6b01b114bc07", 00:21:53.824 "is_configured": true, 00:21:53.824 "data_offset": 0, 00:21:53.824 "data_size": 65536 00:21:53.824 } 00:21:53.824 ] 00:21:53.824 }' 00:21:53.824 13:41:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:53.824 13:41:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:53.824 13:41:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:53.824 13:41:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:53.824 13:41:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:54.781 13:41:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:54.781 13:41:54 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:54.781 13:41:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:54.781 13:41:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:54.781 13:41:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:54.781 13:41:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:54.781 13:41:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:54.781 13:41:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.781 13:41:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.781 13:41:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.781 13:41:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.051 13:41:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:55.051 "name": "raid_bdev1", 00:21:55.051 "uuid": "cbe2cc01-36df-4784-861d-6e351a824a8b", 00:21:55.051 "strip_size_kb": 64, 00:21:55.051 "state": "online", 00:21:55.051 "raid_level": "raid5f", 00:21:55.051 "superblock": false, 00:21:55.051 "num_base_bdevs": 4, 00:21:55.051 "num_base_bdevs_discovered": 4, 00:21:55.051 "num_base_bdevs_operational": 4, 00:21:55.051 "process": { 00:21:55.051 "type": "rebuild", 00:21:55.051 "target": "spare", 00:21:55.051 "progress": { 00:21:55.051 "blocks": 42240, 00:21:55.051 "percent": 21 00:21:55.051 } 00:21:55.051 }, 00:21:55.051 "base_bdevs_list": [ 00:21:55.051 { 00:21:55.051 "name": "spare", 00:21:55.051 "uuid": "88ae6e3b-87d4-5c42-92b1-1814896cfac0", 00:21:55.051 "is_configured": true, 00:21:55.051 "data_offset": 0, 00:21:55.051 "data_size": 65536 00:21:55.051 }, 00:21:55.051 { 00:21:55.051 
"name": "BaseBdev2", 00:21:55.051 "uuid": "18285108-c840-5d21-85d4-c7d64dfc2b8d", 00:21:55.051 "is_configured": true, 00:21:55.051 "data_offset": 0, 00:21:55.051 "data_size": 65536 00:21:55.051 }, 00:21:55.051 { 00:21:55.051 "name": "BaseBdev3", 00:21:55.051 "uuid": "505e1c40-f3f4-5f51-95a2-97fe9783129c", 00:21:55.051 "is_configured": true, 00:21:55.051 "data_offset": 0, 00:21:55.051 "data_size": 65536 00:21:55.051 }, 00:21:55.051 { 00:21:55.051 "name": "BaseBdev4", 00:21:55.051 "uuid": "852bc8a2-4acc-5ded-9b37-6b01b114bc07", 00:21:55.051 "is_configured": true, 00:21:55.051 "data_offset": 0, 00:21:55.051 "data_size": 65536 00:21:55.051 } 00:21:55.051 ] 00:21:55.051 }' 00:21:55.051 13:41:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:55.051 13:41:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:55.051 13:41:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:55.051 13:41:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:55.051 13:41:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:56.006 13:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:56.006 13:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:56.006 13:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:56.006 13:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:56.006 13:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:56.006 13:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:56.006 13:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:21:56.006 13:41:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.006 13:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:56.006 13:41:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.006 13:41:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.006 13:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:56.006 "name": "raid_bdev1", 00:21:56.006 "uuid": "cbe2cc01-36df-4784-861d-6e351a824a8b", 00:21:56.006 "strip_size_kb": 64, 00:21:56.006 "state": "online", 00:21:56.006 "raid_level": "raid5f", 00:21:56.006 "superblock": false, 00:21:56.006 "num_base_bdevs": 4, 00:21:56.006 "num_base_bdevs_discovered": 4, 00:21:56.006 "num_base_bdevs_operational": 4, 00:21:56.006 "process": { 00:21:56.006 "type": "rebuild", 00:21:56.006 "target": "spare", 00:21:56.006 "progress": { 00:21:56.006 "blocks": 63360, 00:21:56.006 "percent": 32 00:21:56.006 } 00:21:56.006 }, 00:21:56.006 "base_bdevs_list": [ 00:21:56.006 { 00:21:56.006 "name": "spare", 00:21:56.006 "uuid": "88ae6e3b-87d4-5c42-92b1-1814896cfac0", 00:21:56.006 "is_configured": true, 00:21:56.006 "data_offset": 0, 00:21:56.006 "data_size": 65536 00:21:56.006 }, 00:21:56.006 { 00:21:56.006 "name": "BaseBdev2", 00:21:56.006 "uuid": "18285108-c840-5d21-85d4-c7d64dfc2b8d", 00:21:56.006 "is_configured": true, 00:21:56.006 "data_offset": 0, 00:21:56.006 "data_size": 65536 00:21:56.006 }, 00:21:56.006 { 00:21:56.006 "name": "BaseBdev3", 00:21:56.006 "uuid": "505e1c40-f3f4-5f51-95a2-97fe9783129c", 00:21:56.006 "is_configured": true, 00:21:56.006 "data_offset": 0, 00:21:56.006 "data_size": 65536 00:21:56.006 }, 00:21:56.006 { 00:21:56.006 "name": "BaseBdev4", 00:21:56.006 "uuid": "852bc8a2-4acc-5ded-9b37-6b01b114bc07", 00:21:56.006 "is_configured": true, 00:21:56.006 "data_offset": 0, 00:21:56.006 
"data_size": 65536 00:21:56.006 } 00:21:56.006 ] 00:21:56.006 }' 00:21:56.006 13:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:56.006 13:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:56.006 13:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:56.265 13:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:56.265 13:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:57.199 13:41:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:57.199 13:41:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:57.199 13:41:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:57.199 13:41:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:57.199 13:41:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:57.199 13:41:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:57.199 13:41:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.199 13:41:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.199 13:41:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.199 13:41:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.199 13:41:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.199 13:41:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:57.199 "name": "raid_bdev1", 00:21:57.199 "uuid": 
"cbe2cc01-36df-4784-861d-6e351a824a8b", 00:21:57.199 "strip_size_kb": 64, 00:21:57.199 "state": "online", 00:21:57.199 "raid_level": "raid5f", 00:21:57.199 "superblock": false, 00:21:57.199 "num_base_bdevs": 4, 00:21:57.199 "num_base_bdevs_discovered": 4, 00:21:57.199 "num_base_bdevs_operational": 4, 00:21:57.199 "process": { 00:21:57.199 "type": "rebuild", 00:21:57.199 "target": "spare", 00:21:57.199 "progress": { 00:21:57.199 "blocks": 86400, 00:21:57.199 "percent": 43 00:21:57.199 } 00:21:57.199 }, 00:21:57.199 "base_bdevs_list": [ 00:21:57.199 { 00:21:57.199 "name": "spare", 00:21:57.199 "uuid": "88ae6e3b-87d4-5c42-92b1-1814896cfac0", 00:21:57.199 "is_configured": true, 00:21:57.199 "data_offset": 0, 00:21:57.199 "data_size": 65536 00:21:57.199 }, 00:21:57.199 { 00:21:57.199 "name": "BaseBdev2", 00:21:57.199 "uuid": "18285108-c840-5d21-85d4-c7d64dfc2b8d", 00:21:57.199 "is_configured": true, 00:21:57.199 "data_offset": 0, 00:21:57.199 "data_size": 65536 00:21:57.199 }, 00:21:57.199 { 00:21:57.199 "name": "BaseBdev3", 00:21:57.199 "uuid": "505e1c40-f3f4-5f51-95a2-97fe9783129c", 00:21:57.199 "is_configured": true, 00:21:57.199 "data_offset": 0, 00:21:57.199 "data_size": 65536 00:21:57.199 }, 00:21:57.199 { 00:21:57.199 "name": "BaseBdev4", 00:21:57.199 "uuid": "852bc8a2-4acc-5ded-9b37-6b01b114bc07", 00:21:57.199 "is_configured": true, 00:21:57.199 "data_offset": 0, 00:21:57.199 "data_size": 65536 00:21:57.199 } 00:21:57.199 ] 00:21:57.199 }' 00:21:57.199 13:41:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:57.199 13:41:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:57.199 13:41:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:57.199 13:41:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:57.199 13:41:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- 
# sleep 1 00:21:58.142 13:41:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:58.142 13:41:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:58.142 13:41:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:58.142 13:41:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:58.142 13:41:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:58.142 13:41:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:58.142 13:41:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:58.142 13:41:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:58.142 13:41:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.142 13:41:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.401 13:41:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.401 13:41:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:58.401 "name": "raid_bdev1", 00:21:58.401 "uuid": "cbe2cc01-36df-4784-861d-6e351a824a8b", 00:21:58.401 "strip_size_kb": 64, 00:21:58.401 "state": "online", 00:21:58.401 "raid_level": "raid5f", 00:21:58.401 "superblock": false, 00:21:58.401 "num_base_bdevs": 4, 00:21:58.401 "num_base_bdevs_discovered": 4, 00:21:58.401 "num_base_bdevs_operational": 4, 00:21:58.401 "process": { 00:21:58.401 "type": "rebuild", 00:21:58.401 "target": "spare", 00:21:58.401 "progress": { 00:21:58.401 "blocks": 107520, 00:21:58.401 "percent": 54 00:21:58.401 } 00:21:58.401 }, 00:21:58.401 "base_bdevs_list": [ 00:21:58.401 { 00:21:58.401 "name": "spare", 00:21:58.401 "uuid": 
"88ae6e3b-87d4-5c42-92b1-1814896cfac0", 00:21:58.401 "is_configured": true, 00:21:58.401 "data_offset": 0, 00:21:58.401 "data_size": 65536 00:21:58.401 }, 00:21:58.401 { 00:21:58.401 "name": "BaseBdev2", 00:21:58.401 "uuid": "18285108-c840-5d21-85d4-c7d64dfc2b8d", 00:21:58.401 "is_configured": true, 00:21:58.401 "data_offset": 0, 00:21:58.401 "data_size": 65536 00:21:58.401 }, 00:21:58.401 { 00:21:58.401 "name": "BaseBdev3", 00:21:58.401 "uuid": "505e1c40-f3f4-5f51-95a2-97fe9783129c", 00:21:58.401 "is_configured": true, 00:21:58.401 "data_offset": 0, 00:21:58.401 "data_size": 65536 00:21:58.401 }, 00:21:58.401 { 00:21:58.401 "name": "BaseBdev4", 00:21:58.401 "uuid": "852bc8a2-4acc-5ded-9b37-6b01b114bc07", 00:21:58.401 "is_configured": true, 00:21:58.401 "data_offset": 0, 00:21:58.401 "data_size": 65536 00:21:58.401 } 00:21:58.401 ] 00:21:58.401 }' 00:21:58.401 13:41:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:58.401 13:41:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:58.401 13:41:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:58.401 13:41:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:58.401 13:41:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:59.337 13:41:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:59.337 13:41:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:59.337 13:41:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:59.337 13:41:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:59.337 13:41:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:59.337 13:41:58 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:59.337 13:41:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.337 13:41:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.338 13:41:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:59.338 13:41:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.338 13:41:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.338 13:41:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:59.338 "name": "raid_bdev1", 00:21:59.338 "uuid": "cbe2cc01-36df-4784-861d-6e351a824a8b", 00:21:59.338 "strip_size_kb": 64, 00:21:59.338 "state": "online", 00:21:59.338 "raid_level": "raid5f", 00:21:59.338 "superblock": false, 00:21:59.338 "num_base_bdevs": 4, 00:21:59.338 "num_base_bdevs_discovered": 4, 00:21:59.338 "num_base_bdevs_operational": 4, 00:21:59.338 "process": { 00:21:59.338 "type": "rebuild", 00:21:59.338 "target": "spare", 00:21:59.338 "progress": { 00:21:59.338 "blocks": 128640, 00:21:59.338 "percent": 65 00:21:59.338 } 00:21:59.338 }, 00:21:59.338 "base_bdevs_list": [ 00:21:59.338 { 00:21:59.338 "name": "spare", 00:21:59.338 "uuid": "88ae6e3b-87d4-5c42-92b1-1814896cfac0", 00:21:59.338 "is_configured": true, 00:21:59.338 "data_offset": 0, 00:21:59.338 "data_size": 65536 00:21:59.338 }, 00:21:59.338 { 00:21:59.338 "name": "BaseBdev2", 00:21:59.338 "uuid": "18285108-c840-5d21-85d4-c7d64dfc2b8d", 00:21:59.338 "is_configured": true, 00:21:59.338 "data_offset": 0, 00:21:59.338 "data_size": 65536 00:21:59.338 }, 00:21:59.338 { 00:21:59.338 "name": "BaseBdev3", 00:21:59.338 "uuid": "505e1c40-f3f4-5f51-95a2-97fe9783129c", 00:21:59.338 "is_configured": true, 00:21:59.338 "data_offset": 0, 00:21:59.338 "data_size": 65536 00:21:59.338 }, 
00:21:59.338 { 00:21:59.338 "name": "BaseBdev4", 00:21:59.338 "uuid": "852bc8a2-4acc-5ded-9b37-6b01b114bc07", 00:21:59.338 "is_configured": true, 00:21:59.338 "data_offset": 0, 00:21:59.338 "data_size": 65536 00:21:59.338 } 00:21:59.338 ] 00:21:59.338 }' 00:21:59.338 13:41:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:59.596 13:41:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:59.596 13:41:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:59.596 13:41:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:59.596 13:41:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:00.534 13:41:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:00.534 13:41:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:00.534 13:41:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:00.534 13:41:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:00.534 13:41:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:00.534 13:41:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:00.534 13:41:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:00.534 13:41:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:00.534 13:41:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.534 13:41:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.534 13:41:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:22:00.534 13:41:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:00.534 "name": "raid_bdev1", 00:22:00.534 "uuid": "cbe2cc01-36df-4784-861d-6e351a824a8b", 00:22:00.534 "strip_size_kb": 64, 00:22:00.534 "state": "online", 00:22:00.534 "raid_level": "raid5f", 00:22:00.534 "superblock": false, 00:22:00.534 "num_base_bdevs": 4, 00:22:00.534 "num_base_bdevs_discovered": 4, 00:22:00.534 "num_base_bdevs_operational": 4, 00:22:00.534 "process": { 00:22:00.534 "type": "rebuild", 00:22:00.534 "target": "spare", 00:22:00.534 "progress": { 00:22:00.534 "blocks": 149760, 00:22:00.534 "percent": 76 00:22:00.534 } 00:22:00.534 }, 00:22:00.534 "base_bdevs_list": [ 00:22:00.534 { 00:22:00.534 "name": "spare", 00:22:00.534 "uuid": "88ae6e3b-87d4-5c42-92b1-1814896cfac0", 00:22:00.534 "is_configured": true, 00:22:00.534 "data_offset": 0, 00:22:00.534 "data_size": 65536 00:22:00.534 }, 00:22:00.534 { 00:22:00.534 "name": "BaseBdev2", 00:22:00.534 "uuid": "18285108-c840-5d21-85d4-c7d64dfc2b8d", 00:22:00.534 "is_configured": true, 00:22:00.534 "data_offset": 0, 00:22:00.534 "data_size": 65536 00:22:00.534 }, 00:22:00.534 { 00:22:00.534 "name": "BaseBdev3", 00:22:00.534 "uuid": "505e1c40-f3f4-5f51-95a2-97fe9783129c", 00:22:00.534 "is_configured": true, 00:22:00.534 "data_offset": 0, 00:22:00.534 "data_size": 65536 00:22:00.534 }, 00:22:00.534 { 00:22:00.534 "name": "BaseBdev4", 00:22:00.534 "uuid": "852bc8a2-4acc-5ded-9b37-6b01b114bc07", 00:22:00.534 "is_configured": true, 00:22:00.534 "data_offset": 0, 00:22:00.534 "data_size": 65536 00:22:00.534 } 00:22:00.534 ] 00:22:00.534 }' 00:22:00.534 13:41:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:00.534 13:41:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:00.534 13:41:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:00.534 13:42:00 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:00.534 13:42:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:01.933 13:42:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:01.933 13:42:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:01.933 13:42:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:01.933 13:42:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:01.933 13:42:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:01.933 13:42:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:01.933 13:42:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.933 13:42:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.933 13:42:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.933 13:42:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:01.933 13:42:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.933 13:42:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:01.933 "name": "raid_bdev1", 00:22:01.933 "uuid": "cbe2cc01-36df-4784-861d-6e351a824a8b", 00:22:01.933 "strip_size_kb": 64, 00:22:01.933 "state": "online", 00:22:01.933 "raid_level": "raid5f", 00:22:01.933 "superblock": false, 00:22:01.933 "num_base_bdevs": 4, 00:22:01.933 "num_base_bdevs_discovered": 4, 00:22:01.933 "num_base_bdevs_operational": 4, 00:22:01.933 "process": { 00:22:01.933 "type": "rebuild", 00:22:01.933 "target": "spare", 00:22:01.933 "progress": { 00:22:01.933 "blocks": 170880, 
00:22:01.933 "percent": 86 00:22:01.933 } 00:22:01.933 }, 00:22:01.933 "base_bdevs_list": [ 00:22:01.933 { 00:22:01.933 "name": "spare", 00:22:01.933 "uuid": "88ae6e3b-87d4-5c42-92b1-1814896cfac0", 00:22:01.933 "is_configured": true, 00:22:01.933 "data_offset": 0, 00:22:01.933 "data_size": 65536 00:22:01.933 }, 00:22:01.933 { 00:22:01.933 "name": "BaseBdev2", 00:22:01.933 "uuid": "18285108-c840-5d21-85d4-c7d64dfc2b8d", 00:22:01.933 "is_configured": true, 00:22:01.933 "data_offset": 0, 00:22:01.933 "data_size": 65536 00:22:01.933 }, 00:22:01.933 { 00:22:01.933 "name": "BaseBdev3", 00:22:01.933 "uuid": "505e1c40-f3f4-5f51-95a2-97fe9783129c", 00:22:01.933 "is_configured": true, 00:22:01.933 "data_offset": 0, 00:22:01.933 "data_size": 65536 00:22:01.933 }, 00:22:01.933 { 00:22:01.933 "name": "BaseBdev4", 00:22:01.933 "uuid": "852bc8a2-4acc-5ded-9b37-6b01b114bc07", 00:22:01.933 "is_configured": true, 00:22:01.933 "data_offset": 0, 00:22:01.933 "data_size": 65536 00:22:01.933 } 00:22:01.933 ] 00:22:01.933 }' 00:22:01.933 13:42:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:01.933 13:42:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:01.933 13:42:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:01.933 13:42:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:01.933 13:42:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:02.870 13:42:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:02.870 13:42:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:02.870 13:42:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:02.870 13:42:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:22:02.870 13:42:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:02.870 13:42:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:02.870 13:42:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:02.870 13:42:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.870 13:42:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.870 13:42:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:02.870 13:42:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.870 13:42:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:02.870 "name": "raid_bdev1", 00:22:02.870 "uuid": "cbe2cc01-36df-4784-861d-6e351a824a8b", 00:22:02.870 "strip_size_kb": 64, 00:22:02.870 "state": "online", 00:22:02.870 "raid_level": "raid5f", 00:22:02.870 "superblock": false, 00:22:02.870 "num_base_bdevs": 4, 00:22:02.870 "num_base_bdevs_discovered": 4, 00:22:02.870 "num_base_bdevs_operational": 4, 00:22:02.870 "process": { 00:22:02.870 "type": "rebuild", 00:22:02.870 "target": "spare", 00:22:02.870 "progress": { 00:22:02.870 "blocks": 193920, 00:22:02.870 "percent": 98 00:22:02.870 } 00:22:02.870 }, 00:22:02.870 "base_bdevs_list": [ 00:22:02.870 { 00:22:02.870 "name": "spare", 00:22:02.870 "uuid": "88ae6e3b-87d4-5c42-92b1-1814896cfac0", 00:22:02.870 "is_configured": true, 00:22:02.870 "data_offset": 0, 00:22:02.870 "data_size": 65536 00:22:02.870 }, 00:22:02.870 { 00:22:02.870 "name": "BaseBdev2", 00:22:02.870 "uuid": "18285108-c840-5d21-85d4-c7d64dfc2b8d", 00:22:02.870 "is_configured": true, 00:22:02.870 "data_offset": 0, 00:22:02.870 "data_size": 65536 00:22:02.870 }, 00:22:02.870 { 00:22:02.870 "name": "BaseBdev3", 00:22:02.870 "uuid": 
"505e1c40-f3f4-5f51-95a2-97fe9783129c", 00:22:02.870 "is_configured": true, 00:22:02.870 "data_offset": 0, 00:22:02.870 "data_size": 65536 00:22:02.870 }, 00:22:02.870 { 00:22:02.870 "name": "BaseBdev4", 00:22:02.870 "uuid": "852bc8a2-4acc-5ded-9b37-6b01b114bc07", 00:22:02.870 "is_configured": true, 00:22:02.870 "data_offset": 0, 00:22:02.870 "data_size": 65536 00:22:02.870 } 00:22:02.870 ] 00:22:02.870 }' 00:22:02.870 13:42:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:02.870 13:42:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:02.870 13:42:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:02.870 13:42:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:02.870 13:42:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:02.870 [2024-11-20 13:42:02.339964] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:02.870 [2024-11-20 13:42:02.340053] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:02.870 [2024-11-20 13:42:02.340129] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:03.807 13:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:03.807 13:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:03.807 13:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:03.807 13:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:03.807 13:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:03.807 13:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:22:03.807 13:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.807 13:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.807 13:42:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.807 13:42:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:04.067 13:42:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.067 13:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:04.067 "name": "raid_bdev1", 00:22:04.067 "uuid": "cbe2cc01-36df-4784-861d-6e351a824a8b", 00:22:04.067 "strip_size_kb": 64, 00:22:04.067 "state": "online", 00:22:04.067 "raid_level": "raid5f", 00:22:04.067 "superblock": false, 00:22:04.067 "num_base_bdevs": 4, 00:22:04.067 "num_base_bdevs_discovered": 4, 00:22:04.067 "num_base_bdevs_operational": 4, 00:22:04.067 "base_bdevs_list": [ 00:22:04.067 { 00:22:04.067 "name": "spare", 00:22:04.067 "uuid": "88ae6e3b-87d4-5c42-92b1-1814896cfac0", 00:22:04.067 "is_configured": true, 00:22:04.067 "data_offset": 0, 00:22:04.067 "data_size": 65536 00:22:04.067 }, 00:22:04.067 { 00:22:04.067 "name": "BaseBdev2", 00:22:04.067 "uuid": "18285108-c840-5d21-85d4-c7d64dfc2b8d", 00:22:04.067 "is_configured": true, 00:22:04.067 "data_offset": 0, 00:22:04.067 "data_size": 65536 00:22:04.067 }, 00:22:04.067 { 00:22:04.067 "name": "BaseBdev3", 00:22:04.067 "uuid": "505e1c40-f3f4-5f51-95a2-97fe9783129c", 00:22:04.067 "is_configured": true, 00:22:04.067 "data_offset": 0, 00:22:04.067 "data_size": 65536 00:22:04.067 }, 00:22:04.067 { 00:22:04.067 "name": "BaseBdev4", 00:22:04.067 "uuid": "852bc8a2-4acc-5ded-9b37-6b01b114bc07", 00:22:04.067 "is_configured": true, 00:22:04.067 "data_offset": 0, 00:22:04.067 "data_size": 65536 00:22:04.067 } 00:22:04.067 ] 00:22:04.067 }' 00:22:04.067 13:42:03 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:04.067 13:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:04.067 13:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:04.067 13:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:22:04.067 13:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:22:04.067 13:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:04.067 13:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:04.067 13:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:04.067 13:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:04.067 13:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:04.067 13:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:04.067 13:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:04.067 13:42:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.067 13:42:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:04.067 13:42:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.067 13:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:04.067 "name": "raid_bdev1", 00:22:04.068 "uuid": "cbe2cc01-36df-4784-861d-6e351a824a8b", 00:22:04.068 "strip_size_kb": 64, 00:22:04.068 "state": "online", 00:22:04.068 "raid_level": "raid5f", 00:22:04.068 "superblock": false, 00:22:04.068 "num_base_bdevs": 4, 00:22:04.068 
"num_base_bdevs_discovered": 4, 00:22:04.068 "num_base_bdevs_operational": 4, 00:22:04.068 "base_bdevs_list": [ 00:22:04.068 { 00:22:04.068 "name": "spare", 00:22:04.068 "uuid": "88ae6e3b-87d4-5c42-92b1-1814896cfac0", 00:22:04.068 "is_configured": true, 00:22:04.068 "data_offset": 0, 00:22:04.068 "data_size": 65536 00:22:04.068 }, 00:22:04.068 { 00:22:04.068 "name": "BaseBdev2", 00:22:04.068 "uuid": "18285108-c840-5d21-85d4-c7d64dfc2b8d", 00:22:04.068 "is_configured": true, 00:22:04.068 "data_offset": 0, 00:22:04.068 "data_size": 65536 00:22:04.068 }, 00:22:04.068 { 00:22:04.068 "name": "BaseBdev3", 00:22:04.068 "uuid": "505e1c40-f3f4-5f51-95a2-97fe9783129c", 00:22:04.068 "is_configured": true, 00:22:04.068 "data_offset": 0, 00:22:04.068 "data_size": 65536 00:22:04.068 }, 00:22:04.068 { 00:22:04.068 "name": "BaseBdev4", 00:22:04.068 "uuid": "852bc8a2-4acc-5ded-9b37-6b01b114bc07", 00:22:04.068 "is_configured": true, 00:22:04.068 "data_offset": 0, 00:22:04.068 "data_size": 65536 00:22:04.068 } 00:22:04.068 ] 00:22:04.068 }' 00:22:04.068 13:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:04.068 13:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:04.068 13:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:04.068 13:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:04.068 13:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:22:04.068 13:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:04.068 13:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:04.068 13:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:04.068 13:42:03 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:04.068 13:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:04.068 13:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:04.068 13:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:04.068 13:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:04.068 13:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:04.068 13:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:04.068 13:42:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.068 13:42:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:04.068 13:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:04.068 13:42:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.327 13:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:04.327 "name": "raid_bdev1", 00:22:04.327 "uuid": "cbe2cc01-36df-4784-861d-6e351a824a8b", 00:22:04.327 "strip_size_kb": 64, 00:22:04.327 "state": "online", 00:22:04.327 "raid_level": "raid5f", 00:22:04.327 "superblock": false, 00:22:04.327 "num_base_bdevs": 4, 00:22:04.327 "num_base_bdevs_discovered": 4, 00:22:04.327 "num_base_bdevs_operational": 4, 00:22:04.327 "base_bdevs_list": [ 00:22:04.327 { 00:22:04.327 "name": "spare", 00:22:04.327 "uuid": "88ae6e3b-87d4-5c42-92b1-1814896cfac0", 00:22:04.327 "is_configured": true, 00:22:04.327 "data_offset": 0, 00:22:04.327 "data_size": 65536 00:22:04.327 }, 00:22:04.327 { 00:22:04.327 "name": "BaseBdev2", 00:22:04.327 "uuid": "18285108-c840-5d21-85d4-c7d64dfc2b8d", 00:22:04.327 "is_configured": true, 00:22:04.327 
"data_offset": 0, 00:22:04.327 "data_size": 65536 00:22:04.327 }, 00:22:04.327 { 00:22:04.327 "name": "BaseBdev3", 00:22:04.327 "uuid": "505e1c40-f3f4-5f51-95a2-97fe9783129c", 00:22:04.327 "is_configured": true, 00:22:04.327 "data_offset": 0, 00:22:04.327 "data_size": 65536 00:22:04.327 }, 00:22:04.327 { 00:22:04.327 "name": "BaseBdev4", 00:22:04.327 "uuid": "852bc8a2-4acc-5ded-9b37-6b01b114bc07", 00:22:04.327 "is_configured": true, 00:22:04.327 "data_offset": 0, 00:22:04.327 "data_size": 65536 00:22:04.327 } 00:22:04.327 ] 00:22:04.327 }' 00:22:04.327 13:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:04.327 13:42:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:04.586 13:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:04.587 13:42:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.587 13:42:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:04.587 [2024-11-20 13:42:03.994429] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:04.587 [2024-11-20 13:42:03.994466] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:04.587 [2024-11-20 13:42:03.994557] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:04.587 [2024-11-20 13:42:03.994658] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:04.587 [2024-11-20 13:42:03.994671] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:04.587 13:42:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.587 13:42:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:04.587 13:42:04 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.587 13:42:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:22:04.587 13:42:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:04.587 13:42:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.587 13:42:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:22:04.587 13:42:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:22:04.587 13:42:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:22:04.587 13:42:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:04.587 13:42:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:04.587 13:42:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:04.587 13:42:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:04.587 13:42:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:04.587 13:42:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:04.587 13:42:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:22:04.587 13:42:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:04.587 13:42:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:04.587 13:42:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:04.846 /dev/nbd0 00:22:04.846 13:42:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:04.846 13:42:04 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:04.846 13:42:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:04.846 13:42:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:22:04.846 13:42:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:04.846 13:42:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:04.846 13:42:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:04.846 13:42:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:22:04.846 13:42:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:04.846 13:42:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:04.846 13:42:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:04.846 1+0 records in 00:22:04.846 1+0 records out 00:22:04.846 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382784 s, 10.7 MB/s 00:22:04.846 13:42:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:04.846 13:42:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:22:04.846 13:42:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:04.846 13:42:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:04.846 13:42:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:22:04.846 13:42:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:04.846 13:42:04 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:04.846 13:42:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:22:05.105 /dev/nbd1 00:22:05.105 13:42:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:05.105 13:42:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:05.105 13:42:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:22:05.105 13:42:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:22:05.105 13:42:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:05.105 13:42:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:05.105 13:42:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:22:05.106 13:42:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:22:05.106 13:42:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:05.106 13:42:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:05.106 13:42:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:05.106 1+0 records in 00:22:05.106 1+0 records out 00:22:05.106 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004311 s, 9.5 MB/s 00:22:05.106 13:42:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:05.106 13:42:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:22:05.106 13:42:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:05.106 
13:42:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:05.106 13:42:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:22:05.106 13:42:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:05.106 13:42:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:05.106 13:42:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:05.365 13:42:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:22:05.365 13:42:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:05.365 13:42:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:05.365 13:42:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:05.365 13:42:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:22:05.365 13:42:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:05.365 13:42:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:05.623 13:42:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:05.623 13:42:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:05.623 13:42:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:05.623 13:42:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:05.623 13:42:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:05.623 13:42:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:05.623 13:42:04 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:22:05.623 13:42:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:22:05.623 13:42:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:05.623 13:42:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:22:05.883 13:42:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:05.883 13:42:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:05.883 13:42:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:05.883 13:42:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:05.883 13:42:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:05.883 13:42:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:05.883 13:42:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:22:05.883 13:42:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:22:05.883 13:42:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:22:05.883 13:42:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84405 00:22:05.883 13:42:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84405 ']' 00:22:05.883 13:42:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84405 00:22:05.883 13:42:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:22:05.883 13:42:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:05.883 13:42:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84405 00:22:05.883 killing process with pid 84405 00:22:05.883 
Received shutdown signal, test time was about 60.000000 seconds 00:22:05.883 00:22:05.883 Latency(us) 00:22:05.883 [2024-11-20T13:42:05.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:05.883 [2024-11-20T13:42:05.368Z] =================================================================================================================== 00:22:05.883 [2024-11-20T13:42:05.368Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:05.883 13:42:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:05.883 13:42:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:05.883 13:42:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84405' 00:22:05.883 13:42:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84405 00:22:05.883 [2024-11-20 13:42:05.264777] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:05.883 13:42:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 84405 00:22:06.455 [2024-11-20 13:42:05.760786] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:07.832 ************************************ 00:22:07.832 END TEST raid5f_rebuild_test 00:22:07.832 ************************************ 00:22:07.832 13:42:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:22:07.832 00:22:07.832 real 0m19.901s 00:22:07.832 user 0m23.373s 00:22:07.832 sys 0m2.505s 00:22:07.832 13:42:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:07.832 13:42:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:07.832 13:42:06 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:22:07.832 13:42:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 
00:22:07.832 13:42:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:07.832 13:42:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:07.832 ************************************ 00:22:07.832 START TEST raid5f_rebuild_test_sb 00:22:07.832 ************************************ 00:22:07.832 13:42:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:22:07.832 13:42:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:22:07.832 13:42:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:22:07.832 13:42:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:22:07.832 13:42:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:22:07.832 13:42:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:22:07.832 13:42:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:22:07.832 13:42:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:07.832 13:42:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:22:07.832 13:42:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:07.832 13:42:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:07.832 13:42:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:22:07.832 13:42:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:07.832 13:42:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:07.832 13:42:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:22:07.832 13:42:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:22:07.832 13:42:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:07.832 13:42:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:22:07.832 13:42:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:07.832 13:42:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:07.832 13:42:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:07.832 13:42:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:22:07.832 13:42:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:22:07.832 13:42:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:22:07.832 13:42:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:22:07.832 13:42:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:22:07.832 13:42:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:22:07.832 13:42:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:22:07.832 13:42:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:22:07.832 13:42:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:22:07.832 13:42:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:22:07.832 13:42:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:22:07.832 13:42:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:22:07.833 13:42:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=84926 00:22:07.833 13:42:06 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@598 -- # waitforlisten 84926 00:22:07.833 13:42:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 84926 ']' 00:22:07.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:07.833 13:42:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:07.833 13:42:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.833 13:42:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:07.833 13:42:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.833 13:42:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:07.833 13:42:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.833 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:07.833 Zero copy mechanism will not be used. 00:22:07.833 [2024-11-20 13:42:07.081201] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:22:07.833 [2024-11-20 13:42:07.081334] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84926 ] 00:22:07.833 [2024-11-20 13:42:07.263195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.092 [2024-11-20 13:42:07.385851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:08.351 [2024-11-20 13:42:07.600981] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:08.351 [2024-11-20 13:42:07.601262] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:08.610 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:08.610 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:22:08.610 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:08.610 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:08.610 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.610 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.610 BaseBdev1_malloc 00:22:08.610 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.610 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:08.610 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.610 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.610 [2024-11-20 13:42:08.073144] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:08.610 [2024-11-20 13:42:08.073220] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:08.610 [2024-11-20 13:42:08.073247] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:08.610 [2024-11-20 13:42:08.073264] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:08.610 [2024-11-20 13:42:08.075859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:08.610 [2024-11-20 13:42:08.075912] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:08.610 BaseBdev1 00:22:08.610 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.610 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:08.610 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:08.610 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.610 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.869 BaseBdev2_malloc 00:22:08.869 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.869 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:08.870 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.870 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.870 [2024-11-20 13:42:08.131132] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:08.870 [2024-11-20 13:42:08.131340] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:22:08.870 [2024-11-20 13:42:08.131377] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:08.870 [2024-11-20 13:42:08.131394] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:08.870 [2024-11-20 13:42:08.133982] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:08.870 [2024-11-20 13:42:08.134031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:08.870 BaseBdev2 00:22:08.870 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.870 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:08.870 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:08.870 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.870 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.870 BaseBdev3_malloc 00:22:08.870 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.870 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:08.870 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.870 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.870 [2024-11-20 13:42:08.201808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:08.870 [2024-11-20 13:42:08.202001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:08.870 [2024-11-20 13:42:08.202039] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:08.870 [2024-11-20 
13:42:08.202073] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:08.870 [2024-11-20 13:42:08.204647] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:08.870 [2024-11-20 13:42:08.204695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:08.870 BaseBdev3 00:22:08.870 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.870 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:08.870 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:08.870 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.870 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.870 BaseBdev4_malloc 00:22:08.870 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.870 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:08.870 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.870 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.870 [2024-11-20 13:42:08.260027] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:08.870 [2024-11-20 13:42:08.260133] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:08.870 [2024-11-20 13:42:08.260161] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:22:08.870 [2024-11-20 13:42:08.260176] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:08.870 [2024-11-20 13:42:08.262761] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:22:08.870 [2024-11-20 13:42:08.262819] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:08.870 BaseBdev4 00:22:08.870 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.870 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:22:08.870 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.870 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.870 spare_malloc 00:22:08.870 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.870 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:08.870 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.870 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.870 spare_delay 00:22:08.870 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.870 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:08.870 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.870 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.870 [2024-11-20 13:42:08.332316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:08.870 [2024-11-20 13:42:08.332392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:08.870 [2024-11-20 13:42:08.332418] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:22:08.870 [2024-11-20 13:42:08.332433] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:08.870 [2024-11-20 13:42:08.335269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:08.870 [2024-11-20 13:42:08.335320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:08.870 spare 00:22:08.870 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.870 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:22:08.870 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.870 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.870 [2024-11-20 13:42:08.344349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:08.870 [2024-11-20 13:42:08.346792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:08.870 [2024-11-20 13:42:08.346992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:08.870 [2024-11-20 13:42:08.347118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:08.870 [2024-11-20 13:42:08.347429] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:08.870 [2024-11-20 13:42:08.347503] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:08.870 [2024-11-20 13:42:08.347900] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:09.130 [2024-11-20 13:42:08.355831] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:09.130 [2024-11-20 13:42:08.355976] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:22:09.130 [2024-11-20 13:42:08.356369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:09.130 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.130 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:22:09.130 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:09.130 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:09.130 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:09.130 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:09.130 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:09.130 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:09.130 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:09.130 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:09.130 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:09.130 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.130 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:09.130 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.130 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.130 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.130 13:42:08 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:09.130 "name": "raid_bdev1", 00:22:09.130 "uuid": "8902efb1-b34f-4e7f-b5fe-f0c1fa2dc0b3", 00:22:09.130 "strip_size_kb": 64, 00:22:09.130 "state": "online", 00:22:09.130 "raid_level": "raid5f", 00:22:09.130 "superblock": true, 00:22:09.130 "num_base_bdevs": 4, 00:22:09.130 "num_base_bdevs_discovered": 4, 00:22:09.130 "num_base_bdevs_operational": 4, 00:22:09.130 "base_bdevs_list": [ 00:22:09.130 { 00:22:09.130 "name": "BaseBdev1", 00:22:09.130 "uuid": "ea86dd9b-7446-5ce7-af2a-bf4338dc2933", 00:22:09.130 "is_configured": true, 00:22:09.130 "data_offset": 2048, 00:22:09.130 "data_size": 63488 00:22:09.130 }, 00:22:09.130 { 00:22:09.130 "name": "BaseBdev2", 00:22:09.130 "uuid": "b70e4cd6-e1ab-5130-b5a5-78380ff7d316", 00:22:09.130 "is_configured": true, 00:22:09.130 "data_offset": 2048, 00:22:09.130 "data_size": 63488 00:22:09.130 }, 00:22:09.130 { 00:22:09.130 "name": "BaseBdev3", 00:22:09.130 "uuid": "5626b63f-c68c-597a-b879-b4b545c3e4fc", 00:22:09.130 "is_configured": true, 00:22:09.130 "data_offset": 2048, 00:22:09.130 "data_size": 63488 00:22:09.130 }, 00:22:09.130 { 00:22:09.130 "name": "BaseBdev4", 00:22:09.130 "uuid": "0eec3c60-e269-5fd8-9179-a892f1ede12a", 00:22:09.130 "is_configured": true, 00:22:09.130 "data_offset": 2048, 00:22:09.130 "data_size": 63488 00:22:09.130 } 00:22:09.130 ] 00:22:09.130 }' 00:22:09.130 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:09.130 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.389 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:22:09.389 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:09.389 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.389 13:42:08 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.390 [2024-11-20 13:42:08.813497] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:09.390 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.390 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:22:09.390 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.390 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.390 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:09.390 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.649 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.649 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:22:09.649 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:22:09.649 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:22:09.649 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:22:09.649 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:22:09.649 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:09.649 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:09.649 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:09.649 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:09.649 13:42:08 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:09.649 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:22:09.649 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:09.649 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:09.649 13:42:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:09.649 [2024-11-20 13:42:09.105294] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:22:09.649 /dev/nbd0 00:22:09.913 13:42:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:09.913 13:42:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:09.913 13:42:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:09.913 13:42:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:22:09.913 13:42:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:09.913 13:42:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:09.913 13:42:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:09.913 13:42:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:22:09.913 13:42:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:09.913 13:42:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:09.913 13:42:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:09.913 1+0 records in 00:22:09.913 
1+0 records out 00:22:09.913 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333644 s, 12.3 MB/s 00:22:09.913 13:42:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:09.913 13:42:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:22:09.913 13:42:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:09.913 13:42:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:09.913 13:42:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:22:09.913 13:42:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:09.913 13:42:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:09.913 13:42:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:22:09.913 13:42:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:22:09.913 13:42:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:22:09.913 13:42:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:22:10.562 496+0 records in 00:22:10.562 496+0 records out 00:22:10.562 97517568 bytes (98 MB, 93 MiB) copied, 0.691893 s, 141 MB/s 00:22:10.562 13:42:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:22:10.562 13:42:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:10.562 13:42:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:10.562 13:42:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:10.562 13:42:09 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:22:10.562 13:42:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:10.562 13:42:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:10.820 13:42:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:10.820 [2024-11-20 13:42:10.119744] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:10.820 13:42:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:10.820 13:42:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:10.820 13:42:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:10.820 13:42:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:10.820 13:42:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:10.820 13:42:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:22:10.820 13:42:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:22:10.820 13:42:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:22:10.820 13:42:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.820 13:42:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.820 [2024-11-20 13:42:10.142206] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:10.821 13:42:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.821 13:42:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:10.821 13:42:10 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:10.821 13:42:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:10.821 13:42:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:10.821 13:42:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:10.821 13:42:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:10.821 13:42:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:10.821 13:42:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:10.821 13:42:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:10.821 13:42:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:10.821 13:42:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.821 13:42:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.821 13:42:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.821 13:42:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.821 13:42:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.821 13:42:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:10.821 "name": "raid_bdev1", 00:22:10.821 "uuid": "8902efb1-b34f-4e7f-b5fe-f0c1fa2dc0b3", 00:22:10.821 "strip_size_kb": 64, 00:22:10.821 "state": "online", 00:22:10.821 "raid_level": "raid5f", 00:22:10.821 "superblock": true, 00:22:10.821 "num_base_bdevs": 4, 00:22:10.821 "num_base_bdevs_discovered": 3, 00:22:10.821 "num_base_bdevs_operational": 3, 00:22:10.821 
"base_bdevs_list": [ 00:22:10.821 { 00:22:10.821 "name": null, 00:22:10.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.821 "is_configured": false, 00:22:10.821 "data_offset": 0, 00:22:10.821 "data_size": 63488 00:22:10.821 }, 00:22:10.821 { 00:22:10.821 "name": "BaseBdev2", 00:22:10.821 "uuid": "b70e4cd6-e1ab-5130-b5a5-78380ff7d316", 00:22:10.821 "is_configured": true, 00:22:10.821 "data_offset": 2048, 00:22:10.821 "data_size": 63488 00:22:10.821 }, 00:22:10.821 { 00:22:10.821 "name": "BaseBdev3", 00:22:10.821 "uuid": "5626b63f-c68c-597a-b879-b4b545c3e4fc", 00:22:10.821 "is_configured": true, 00:22:10.821 "data_offset": 2048, 00:22:10.821 "data_size": 63488 00:22:10.821 }, 00:22:10.821 { 00:22:10.821 "name": "BaseBdev4", 00:22:10.821 "uuid": "0eec3c60-e269-5fd8-9179-a892f1ede12a", 00:22:10.821 "is_configured": true, 00:22:10.821 "data_offset": 2048, 00:22:10.821 "data_size": 63488 00:22:10.821 } 00:22:10.821 ] 00:22:10.821 }' 00:22:10.821 13:42:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:10.821 13:42:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.389 13:42:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:11.389 13:42:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.389 13:42:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.389 [2024-11-20 13:42:10.625540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:11.389 [2024-11-20 13:42:10.644258] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:22:11.389 13:42:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.389 13:42:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:22:11.389 [2024-11-20 13:42:10.656362] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:12.327 13:42:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:12.327 13:42:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:12.327 13:42:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:12.327 13:42:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:12.327 13:42:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:12.327 13:42:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.327 13:42:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:12.327 13:42:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.327 13:42:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.327 13:42:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.327 13:42:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:12.327 "name": "raid_bdev1", 00:22:12.327 "uuid": "8902efb1-b34f-4e7f-b5fe-f0c1fa2dc0b3", 00:22:12.327 "strip_size_kb": 64, 00:22:12.327 "state": "online", 00:22:12.327 "raid_level": "raid5f", 00:22:12.327 "superblock": true, 00:22:12.327 "num_base_bdevs": 4, 00:22:12.327 "num_base_bdevs_discovered": 4, 00:22:12.327 "num_base_bdevs_operational": 4, 00:22:12.327 "process": { 00:22:12.327 "type": "rebuild", 00:22:12.327 "target": "spare", 00:22:12.327 "progress": { 00:22:12.327 "blocks": 17280, 00:22:12.327 "percent": 9 00:22:12.327 } 00:22:12.327 }, 00:22:12.327 "base_bdevs_list": [ 00:22:12.327 { 00:22:12.327 "name": "spare", 00:22:12.327 "uuid": 
"0d8763e0-f475-54b4-a96d-2696aaa95e57", 00:22:12.327 "is_configured": true, 00:22:12.327 "data_offset": 2048, 00:22:12.327 "data_size": 63488 00:22:12.327 }, 00:22:12.327 { 00:22:12.327 "name": "BaseBdev2", 00:22:12.327 "uuid": "b70e4cd6-e1ab-5130-b5a5-78380ff7d316", 00:22:12.327 "is_configured": true, 00:22:12.327 "data_offset": 2048, 00:22:12.327 "data_size": 63488 00:22:12.327 }, 00:22:12.327 { 00:22:12.327 "name": "BaseBdev3", 00:22:12.327 "uuid": "5626b63f-c68c-597a-b879-b4b545c3e4fc", 00:22:12.327 "is_configured": true, 00:22:12.327 "data_offset": 2048, 00:22:12.327 "data_size": 63488 00:22:12.327 }, 00:22:12.327 { 00:22:12.327 "name": "BaseBdev4", 00:22:12.327 "uuid": "0eec3c60-e269-5fd8-9179-a892f1ede12a", 00:22:12.327 "is_configured": true, 00:22:12.327 "data_offset": 2048, 00:22:12.327 "data_size": 63488 00:22:12.327 } 00:22:12.327 ] 00:22:12.327 }' 00:22:12.327 13:42:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:12.327 13:42:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:12.327 13:42:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:12.327 13:42:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:12.327 13:42:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:12.327 13:42:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.328 13:42:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.328 [2024-11-20 13:42:11.788254] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:12.586 [2024-11-20 13:42:11.866168] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:12.586 [2024-11-20 13:42:11.866308] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:12.586 [2024-11-20 13:42:11.866338] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:12.586 [2024-11-20 13:42:11.866355] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:12.586 13:42:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.586 13:42:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:12.586 13:42:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:12.586 13:42:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:12.586 13:42:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:12.586 13:42:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:12.586 13:42:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:12.586 13:42:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:12.586 13:42:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:12.586 13:42:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:12.587 13:42:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:12.587 13:42:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.587 13:42:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.587 13:42:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:12.587 13:42:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
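The verify_raid_bdev_process calls traced above all follow the same pattern: fetch the raid bdev's JSON with `rpc_cmd bdev_raid_get_bdevs all`, select the bdev by name with jq, then compare `.process.type` and `.process.target` (defaulting to `"none"`) against the expected values. A minimal dependency-free sketch of that check, where the JSON is a trimmed copy of the `raid_bdev_info` output above and sed stands in for the jq filters:

```shell
#!/usr/bin/env bash
# Sketch of the verify_raid_bdev_process check pattern; the real helper uses
# rpc_cmd and jq, here replaced by a fixed JSON snippet and sed extraction.
raid_bdev_info='{"process": {"type": "rebuild", "target": "spare"}}'

# Stand-in for: jq -r '.process.type // "none"'
process_type=$(sed -n 's/.*"type": "\([a-z]*\)".*/\1/p' <<<"$raid_bdev_info")
# Stand-in for: jq -r '.process.target // "none"'
process_target=$(sed -n 's/.*"target": "\([a-z]*\)".*/\1/p' <<<"$raid_bdev_info")

[[ $process_type == rebuild ]] && echo "process type ok"
[[ $process_target == spare ]] && echo "process target ok"
```

The `// "none"` alternative in the real jq filter is what lets the same helper also verify the no-process case (`verify_raid_bdev_process raid_bdev1 none none`) once the rebuild has finished.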
00:22:12.587 13:42:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.587 13:42:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:12.587 "name": "raid_bdev1", 00:22:12.587 "uuid": "8902efb1-b34f-4e7f-b5fe-f0c1fa2dc0b3", 00:22:12.587 "strip_size_kb": 64, 00:22:12.587 "state": "online", 00:22:12.587 "raid_level": "raid5f", 00:22:12.587 "superblock": true, 00:22:12.587 "num_base_bdevs": 4, 00:22:12.587 "num_base_bdevs_discovered": 3, 00:22:12.587 "num_base_bdevs_operational": 3, 00:22:12.587 "base_bdevs_list": [ 00:22:12.587 { 00:22:12.587 "name": null, 00:22:12.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.587 "is_configured": false, 00:22:12.587 "data_offset": 0, 00:22:12.587 "data_size": 63488 00:22:12.587 }, 00:22:12.587 { 00:22:12.587 "name": "BaseBdev2", 00:22:12.587 "uuid": "b70e4cd6-e1ab-5130-b5a5-78380ff7d316", 00:22:12.587 "is_configured": true, 00:22:12.587 "data_offset": 2048, 00:22:12.587 "data_size": 63488 00:22:12.587 }, 00:22:12.587 { 00:22:12.587 "name": "BaseBdev3", 00:22:12.587 "uuid": "5626b63f-c68c-597a-b879-b4b545c3e4fc", 00:22:12.587 "is_configured": true, 00:22:12.587 "data_offset": 2048, 00:22:12.587 "data_size": 63488 00:22:12.587 }, 00:22:12.587 { 00:22:12.587 "name": "BaseBdev4", 00:22:12.587 "uuid": "0eec3c60-e269-5fd8-9179-a892f1ede12a", 00:22:12.587 "is_configured": true, 00:22:12.587 "data_offset": 2048, 00:22:12.587 "data_size": 63488 00:22:12.587 } 00:22:12.587 ] 00:22:12.587 }' 00:22:12.587 13:42:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:12.587 13:42:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.155 13:42:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:13.155 13:42:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:13.155 
13:42:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:13.155 13:42:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:13.155 13:42:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:13.155 13:42:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.155 13:42:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.155 13:42:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.155 13:42:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.155 13:42:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.155 13:42:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:13.155 "name": "raid_bdev1", 00:22:13.155 "uuid": "8902efb1-b34f-4e7f-b5fe-f0c1fa2dc0b3", 00:22:13.155 "strip_size_kb": 64, 00:22:13.155 "state": "online", 00:22:13.155 "raid_level": "raid5f", 00:22:13.155 "superblock": true, 00:22:13.155 "num_base_bdevs": 4, 00:22:13.155 "num_base_bdevs_discovered": 3, 00:22:13.155 "num_base_bdevs_operational": 3, 00:22:13.155 "base_bdevs_list": [ 00:22:13.155 { 00:22:13.155 "name": null, 00:22:13.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.155 "is_configured": false, 00:22:13.155 "data_offset": 0, 00:22:13.155 "data_size": 63488 00:22:13.155 }, 00:22:13.155 { 00:22:13.155 "name": "BaseBdev2", 00:22:13.155 "uuid": "b70e4cd6-e1ab-5130-b5a5-78380ff7d316", 00:22:13.155 "is_configured": true, 00:22:13.155 "data_offset": 2048, 00:22:13.155 "data_size": 63488 00:22:13.155 }, 00:22:13.155 { 00:22:13.155 "name": "BaseBdev3", 00:22:13.155 "uuid": "5626b63f-c68c-597a-b879-b4b545c3e4fc", 00:22:13.155 "is_configured": true, 00:22:13.155 "data_offset": 2048, 00:22:13.155 
"data_size": 63488 00:22:13.155 }, 00:22:13.155 { 00:22:13.155 "name": "BaseBdev4", 00:22:13.155 "uuid": "0eec3c60-e269-5fd8-9179-a892f1ede12a", 00:22:13.155 "is_configured": true, 00:22:13.155 "data_offset": 2048, 00:22:13.155 "data_size": 63488 00:22:13.155 } 00:22:13.155 ] 00:22:13.155 }' 00:22:13.155 13:42:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:13.155 13:42:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:13.155 13:42:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:13.155 13:42:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:13.155 13:42:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:13.155 13:42:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.155 13:42:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.155 [2024-11-20 13:42:12.541231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:13.155 [2024-11-20 13:42:12.559141] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:22:13.155 13:42:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.155 13:42:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:22:13.155 [2024-11-20 13:42:12.570422] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:14.094 13:42:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:14.094 13:42:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:14.094 13:42:13 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:14.094 13:42:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:14.094 13:42:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:14.094 13:42:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:14.094 13:42:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.094 13:42:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.094 13:42:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:14.354 13:42:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.354 13:42:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:14.354 "name": "raid_bdev1", 00:22:14.354 "uuid": "8902efb1-b34f-4e7f-b5fe-f0c1fa2dc0b3", 00:22:14.354 "strip_size_kb": 64, 00:22:14.354 "state": "online", 00:22:14.354 "raid_level": "raid5f", 00:22:14.354 "superblock": true, 00:22:14.354 "num_base_bdevs": 4, 00:22:14.354 "num_base_bdevs_discovered": 4, 00:22:14.354 "num_base_bdevs_operational": 4, 00:22:14.354 "process": { 00:22:14.354 "type": "rebuild", 00:22:14.354 "target": "spare", 00:22:14.354 "progress": { 00:22:14.354 "blocks": 17280, 00:22:14.354 "percent": 9 00:22:14.354 } 00:22:14.354 }, 00:22:14.354 "base_bdevs_list": [ 00:22:14.354 { 00:22:14.354 "name": "spare", 00:22:14.354 "uuid": "0d8763e0-f475-54b4-a96d-2696aaa95e57", 00:22:14.354 "is_configured": true, 00:22:14.354 "data_offset": 2048, 00:22:14.354 "data_size": 63488 00:22:14.354 }, 00:22:14.354 { 00:22:14.354 "name": "BaseBdev2", 00:22:14.354 "uuid": "b70e4cd6-e1ab-5130-b5a5-78380ff7d316", 00:22:14.354 "is_configured": true, 00:22:14.354 "data_offset": 2048, 00:22:14.354 "data_size": 63488 00:22:14.354 }, 00:22:14.354 { 
00:22:14.354 "name": "BaseBdev3", 00:22:14.354 "uuid": "5626b63f-c68c-597a-b879-b4b545c3e4fc", 00:22:14.354 "is_configured": true, 00:22:14.354 "data_offset": 2048, 00:22:14.355 "data_size": 63488 00:22:14.355 }, 00:22:14.355 { 00:22:14.355 "name": "BaseBdev4", 00:22:14.355 "uuid": "0eec3c60-e269-5fd8-9179-a892f1ede12a", 00:22:14.355 "is_configured": true, 00:22:14.355 "data_offset": 2048, 00:22:14.355 "data_size": 63488 00:22:14.355 } 00:22:14.355 ] 00:22:14.355 }' 00:22:14.355 13:42:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:14.355 13:42:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:14.355 13:42:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:14.355 13:42:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:14.355 13:42:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:22:14.355 13:42:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:22:14.355 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:22:14.355 13:42:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:22:14.355 13:42:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:22:14.355 13:42:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=641 00:22:14.355 13:42:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:14.355 13:42:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:14.355 13:42:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:14.355 13:42:13 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:14.355 13:42:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:14.355 13:42:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:14.355 13:42:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:14.355 13:42:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.355 13:42:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.355 13:42:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:14.355 13:42:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.355 13:42:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:14.355 "name": "raid_bdev1", 00:22:14.355 "uuid": "8902efb1-b34f-4e7f-b5fe-f0c1fa2dc0b3", 00:22:14.355 "strip_size_kb": 64, 00:22:14.355 "state": "online", 00:22:14.355 "raid_level": "raid5f", 00:22:14.355 "superblock": true, 00:22:14.355 "num_base_bdevs": 4, 00:22:14.355 "num_base_bdevs_discovered": 4, 00:22:14.355 "num_base_bdevs_operational": 4, 00:22:14.355 "process": { 00:22:14.355 "type": "rebuild", 00:22:14.355 "target": "spare", 00:22:14.355 "progress": { 00:22:14.355 "blocks": 21120, 00:22:14.355 "percent": 11 00:22:14.355 } 00:22:14.355 }, 00:22:14.355 "base_bdevs_list": [ 00:22:14.355 { 00:22:14.355 "name": "spare", 00:22:14.355 "uuid": "0d8763e0-f475-54b4-a96d-2696aaa95e57", 00:22:14.355 "is_configured": true, 00:22:14.355 "data_offset": 2048, 00:22:14.355 "data_size": 63488 00:22:14.355 }, 00:22:14.355 { 00:22:14.355 "name": "BaseBdev2", 00:22:14.355 "uuid": "b70e4cd6-e1ab-5130-b5a5-78380ff7d316", 00:22:14.355 "is_configured": true, 00:22:14.355 "data_offset": 2048, 00:22:14.355 "data_size": 63488 00:22:14.355 }, 00:22:14.355 { 
00:22:14.355 "name": "BaseBdev3", 00:22:14.355 "uuid": "5626b63f-c68c-597a-b879-b4b545c3e4fc", 00:22:14.355 "is_configured": true, 00:22:14.355 "data_offset": 2048, 00:22:14.355 "data_size": 63488 00:22:14.355 }, 00:22:14.355 { 00:22:14.355 "name": "BaseBdev4", 00:22:14.355 "uuid": "0eec3c60-e269-5fd8-9179-a892f1ede12a", 00:22:14.355 "is_configured": true, 00:22:14.355 "data_offset": 2048, 00:22:14.355 "data_size": 63488 00:22:14.355 } 00:22:14.355 ] 00:22:14.355 }' 00:22:14.355 13:42:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:14.355 13:42:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:14.355 13:42:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:14.355 13:42:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:14.355 13:42:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:15.329 13:42:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:15.329 13:42:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:15.329 13:42:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:15.329 13:42:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:15.329 13:42:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:15.329 13:42:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:15.329 13:42:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:15.329 13:42:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.329 13:42:14 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:22:15.329 13:42:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:15.589 13:42:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.589 13:42:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:15.589 "name": "raid_bdev1", 00:22:15.589 "uuid": "8902efb1-b34f-4e7f-b5fe-f0c1fa2dc0b3", 00:22:15.589 "strip_size_kb": 64, 00:22:15.589 "state": "online", 00:22:15.589 "raid_level": "raid5f", 00:22:15.589 "superblock": true, 00:22:15.589 "num_base_bdevs": 4, 00:22:15.589 "num_base_bdevs_discovered": 4, 00:22:15.589 "num_base_bdevs_operational": 4, 00:22:15.589 "process": { 00:22:15.589 "type": "rebuild", 00:22:15.589 "target": "spare", 00:22:15.589 "progress": { 00:22:15.589 "blocks": 42240, 00:22:15.589 "percent": 22 00:22:15.589 } 00:22:15.589 }, 00:22:15.589 "base_bdevs_list": [ 00:22:15.589 { 00:22:15.589 "name": "spare", 00:22:15.589 "uuid": "0d8763e0-f475-54b4-a96d-2696aaa95e57", 00:22:15.589 "is_configured": true, 00:22:15.589 "data_offset": 2048, 00:22:15.589 "data_size": 63488 00:22:15.589 }, 00:22:15.589 { 00:22:15.589 "name": "BaseBdev2", 00:22:15.589 "uuid": "b70e4cd6-e1ab-5130-b5a5-78380ff7d316", 00:22:15.589 "is_configured": true, 00:22:15.589 "data_offset": 2048, 00:22:15.589 "data_size": 63488 00:22:15.589 }, 00:22:15.589 { 00:22:15.589 "name": "BaseBdev3", 00:22:15.589 "uuid": "5626b63f-c68c-597a-b879-b4b545c3e4fc", 00:22:15.589 "is_configured": true, 00:22:15.589 "data_offset": 2048, 00:22:15.589 "data_size": 63488 00:22:15.589 }, 00:22:15.589 { 00:22:15.589 "name": "BaseBdev4", 00:22:15.590 "uuid": "0eec3c60-e269-5fd8-9179-a892f1ede12a", 00:22:15.590 "is_configured": true, 00:22:15.590 "data_offset": 2048, 00:22:15.590 "data_size": 63488 00:22:15.590 } 00:22:15.590 ] 00:22:15.590 }' 00:22:15.590 13:42:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:22:15.590 13:42:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:15.590 13:42:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:15.590 13:42:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:15.590 13:42:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:16.526 13:42:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:16.526 13:42:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:16.526 13:42:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:16.526 13:42:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:16.526 13:42:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:16.526 13:42:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:16.526 13:42:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:16.526 13:42:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.526 13:42:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:16.526 13:42:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:16.526 13:42:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.526 13:42:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:16.526 "name": "raid_bdev1", 00:22:16.526 "uuid": "8902efb1-b34f-4e7f-b5fe-f0c1fa2dc0b3", 00:22:16.526 "strip_size_kb": 64, 00:22:16.526 "state": "online", 00:22:16.526 
"raid_level": "raid5f", 00:22:16.526 "superblock": true, 00:22:16.526 "num_base_bdevs": 4, 00:22:16.526 "num_base_bdevs_discovered": 4, 00:22:16.526 "num_base_bdevs_operational": 4, 00:22:16.526 "process": { 00:22:16.526 "type": "rebuild", 00:22:16.526 "target": "spare", 00:22:16.526 "progress": { 00:22:16.526 "blocks": 63360, 00:22:16.526 "percent": 33 00:22:16.526 } 00:22:16.526 }, 00:22:16.526 "base_bdevs_list": [ 00:22:16.526 { 00:22:16.526 "name": "spare", 00:22:16.526 "uuid": "0d8763e0-f475-54b4-a96d-2696aaa95e57", 00:22:16.526 "is_configured": true, 00:22:16.526 "data_offset": 2048, 00:22:16.526 "data_size": 63488 00:22:16.526 }, 00:22:16.526 { 00:22:16.526 "name": "BaseBdev2", 00:22:16.526 "uuid": "b70e4cd6-e1ab-5130-b5a5-78380ff7d316", 00:22:16.526 "is_configured": true, 00:22:16.526 "data_offset": 2048, 00:22:16.526 "data_size": 63488 00:22:16.526 }, 00:22:16.526 { 00:22:16.526 "name": "BaseBdev3", 00:22:16.526 "uuid": "5626b63f-c68c-597a-b879-b4b545c3e4fc", 00:22:16.526 "is_configured": true, 00:22:16.526 "data_offset": 2048, 00:22:16.526 "data_size": 63488 00:22:16.526 }, 00:22:16.526 { 00:22:16.526 "name": "BaseBdev4", 00:22:16.526 "uuid": "0eec3c60-e269-5fd8-9179-a892f1ede12a", 00:22:16.526 "is_configured": true, 00:22:16.526 "data_offset": 2048, 00:22:16.526 "data_size": 63488 00:22:16.526 } 00:22:16.526 ] 00:22:16.526 }' 00:22:16.526 13:42:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:16.786 13:42:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:16.786 13:42:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:16.786 13:42:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:16.786 13:42:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:17.722 13:42:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- 
# (( SECONDS < timeout )) 00:22:17.722 13:42:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:17.722 13:42:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:17.722 13:42:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:17.722 13:42:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:17.722 13:42:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:17.723 13:42:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.723 13:42:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.723 13:42:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.723 13:42:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:17.723 13:42:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.723 13:42:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:17.723 "name": "raid_bdev1", 00:22:17.723 "uuid": "8902efb1-b34f-4e7f-b5fe-f0c1fa2dc0b3", 00:22:17.723 "strip_size_kb": 64, 00:22:17.723 "state": "online", 00:22:17.723 "raid_level": "raid5f", 00:22:17.723 "superblock": true, 00:22:17.723 "num_base_bdevs": 4, 00:22:17.723 "num_base_bdevs_discovered": 4, 00:22:17.723 "num_base_bdevs_operational": 4, 00:22:17.723 "process": { 00:22:17.723 "type": "rebuild", 00:22:17.723 "target": "spare", 00:22:17.723 "progress": { 00:22:17.723 "blocks": 84480, 00:22:17.723 "percent": 44 00:22:17.723 } 00:22:17.723 }, 00:22:17.723 "base_bdevs_list": [ 00:22:17.723 { 00:22:17.723 "name": "spare", 00:22:17.723 "uuid": "0d8763e0-f475-54b4-a96d-2696aaa95e57", 00:22:17.723 "is_configured": true, 
00:22:17.723 "data_offset": 2048, 00:22:17.723 "data_size": 63488 00:22:17.723 }, 00:22:17.723 { 00:22:17.723 "name": "BaseBdev2", 00:22:17.723 "uuid": "b70e4cd6-e1ab-5130-b5a5-78380ff7d316", 00:22:17.723 "is_configured": true, 00:22:17.723 "data_offset": 2048, 00:22:17.723 "data_size": 63488 00:22:17.723 }, 00:22:17.723 { 00:22:17.723 "name": "BaseBdev3", 00:22:17.723 "uuid": "5626b63f-c68c-597a-b879-b4b545c3e4fc", 00:22:17.723 "is_configured": true, 00:22:17.723 "data_offset": 2048, 00:22:17.723 "data_size": 63488 00:22:17.723 }, 00:22:17.723 { 00:22:17.723 "name": "BaseBdev4", 00:22:17.723 "uuid": "0eec3c60-e269-5fd8-9179-a892f1ede12a", 00:22:17.723 "is_configured": true, 00:22:17.723 "data_offset": 2048, 00:22:17.723 "data_size": 63488 00:22:17.723 } 00:22:17.723 ] 00:22:17.723 }' 00:22:17.723 13:42:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:17.723 13:42:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:17.723 13:42:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:17.981 13:42:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:17.981 13:42:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:18.915 13:42:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:18.915 13:42:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:18.915 13:42:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:18.915 13:42:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:18.915 13:42:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:18.915 13:42:18 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:18.915 13:42:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:18.915 13:42:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.915 13:42:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.915 13:42:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:18.915 13:42:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.915 13:42:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:18.915 "name": "raid_bdev1", 00:22:18.915 "uuid": "8902efb1-b34f-4e7f-b5fe-f0c1fa2dc0b3", 00:22:18.915 "strip_size_kb": 64, 00:22:18.915 "state": "online", 00:22:18.915 "raid_level": "raid5f", 00:22:18.915 "superblock": true, 00:22:18.915 "num_base_bdevs": 4, 00:22:18.915 "num_base_bdevs_discovered": 4, 00:22:18.915 "num_base_bdevs_operational": 4, 00:22:18.915 "process": { 00:22:18.915 "type": "rebuild", 00:22:18.915 "target": "spare", 00:22:18.915 "progress": { 00:22:18.915 "blocks": 107520, 00:22:18.915 "percent": 56 00:22:18.915 } 00:22:18.915 }, 00:22:18.915 "base_bdevs_list": [ 00:22:18.915 { 00:22:18.915 "name": "spare", 00:22:18.915 "uuid": "0d8763e0-f475-54b4-a96d-2696aaa95e57", 00:22:18.915 "is_configured": true, 00:22:18.915 "data_offset": 2048, 00:22:18.915 "data_size": 63488 00:22:18.915 }, 00:22:18.915 { 00:22:18.915 "name": "BaseBdev2", 00:22:18.915 "uuid": "b70e4cd6-e1ab-5130-b5a5-78380ff7d316", 00:22:18.915 "is_configured": true, 00:22:18.915 "data_offset": 2048, 00:22:18.915 "data_size": 63488 00:22:18.915 }, 00:22:18.915 { 00:22:18.915 "name": "BaseBdev3", 00:22:18.915 "uuid": "5626b63f-c68c-597a-b879-b4b545c3e4fc", 00:22:18.915 "is_configured": true, 00:22:18.915 "data_offset": 2048, 00:22:18.915 "data_size": 63488 00:22:18.915 }, 00:22:18.915 
{ 00:22:18.915 "name": "BaseBdev4", 00:22:18.915 "uuid": "0eec3c60-e269-5fd8-9179-a892f1ede12a", 00:22:18.915 "is_configured": true, 00:22:18.915 "data_offset": 2048, 00:22:18.915 "data_size": 63488 00:22:18.915 } 00:22:18.915 ] 00:22:18.915 }' 00:22:18.915 13:42:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:18.915 13:42:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:18.915 13:42:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:18.915 13:42:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:18.915 13:42:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:19.888 13:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:19.888 13:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:19.888 13:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:19.888 13:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:19.888 13:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:19.888 13:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:19.888 13:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:19.888 13:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:19.888 13:42:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.888 13:42:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.144 13:42:19 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.144 13:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:20.144 "name": "raid_bdev1", 00:22:20.144 "uuid": "8902efb1-b34f-4e7f-b5fe-f0c1fa2dc0b3", 00:22:20.144 "strip_size_kb": 64, 00:22:20.144 "state": "online", 00:22:20.144 "raid_level": "raid5f", 00:22:20.144 "superblock": true, 00:22:20.144 "num_base_bdevs": 4, 00:22:20.144 "num_base_bdevs_discovered": 4, 00:22:20.144 "num_base_bdevs_operational": 4, 00:22:20.144 "process": { 00:22:20.144 "type": "rebuild", 00:22:20.144 "target": "spare", 00:22:20.144 "progress": { 00:22:20.144 "blocks": 128640, 00:22:20.144 "percent": 67 00:22:20.144 } 00:22:20.144 }, 00:22:20.144 "base_bdevs_list": [ 00:22:20.144 { 00:22:20.144 "name": "spare", 00:22:20.144 "uuid": "0d8763e0-f475-54b4-a96d-2696aaa95e57", 00:22:20.144 "is_configured": true, 00:22:20.144 "data_offset": 2048, 00:22:20.144 "data_size": 63488 00:22:20.144 }, 00:22:20.144 { 00:22:20.144 "name": "BaseBdev2", 00:22:20.144 "uuid": "b70e4cd6-e1ab-5130-b5a5-78380ff7d316", 00:22:20.144 "is_configured": true, 00:22:20.144 "data_offset": 2048, 00:22:20.144 "data_size": 63488 00:22:20.144 }, 00:22:20.144 { 00:22:20.144 "name": "BaseBdev3", 00:22:20.144 "uuid": "5626b63f-c68c-597a-b879-b4b545c3e4fc", 00:22:20.144 "is_configured": true, 00:22:20.144 "data_offset": 2048, 00:22:20.144 "data_size": 63488 00:22:20.144 }, 00:22:20.144 { 00:22:20.144 "name": "BaseBdev4", 00:22:20.144 "uuid": "0eec3c60-e269-5fd8-9179-a892f1ede12a", 00:22:20.144 "is_configured": true, 00:22:20.144 "data_offset": 2048, 00:22:20.144 "data_size": 63488 00:22:20.144 } 00:22:20.144 ] 00:22:20.144 }' 00:22:20.144 13:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:20.144 13:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:20.144 13:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:22:20.144 13:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:20.144 13:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:21.080 13:42:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:21.080 13:42:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:21.080 13:42:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:21.080 13:42:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:21.080 13:42:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:21.080 13:42:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:21.080 13:42:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:21.080 13:42:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.080 13:42:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:21.080 13:42:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:21.080 13:42:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.080 13:42:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:21.080 "name": "raid_bdev1", 00:22:21.080 "uuid": "8902efb1-b34f-4e7f-b5fe-f0c1fa2dc0b3", 00:22:21.080 "strip_size_kb": 64, 00:22:21.080 "state": "online", 00:22:21.080 "raid_level": "raid5f", 00:22:21.080 "superblock": true, 00:22:21.080 "num_base_bdevs": 4, 00:22:21.080 "num_base_bdevs_discovered": 4, 00:22:21.080 "num_base_bdevs_operational": 4, 00:22:21.080 "process": { 00:22:21.080 "type": 
"rebuild", 00:22:21.080 "target": "spare", 00:22:21.080 "progress": { 00:22:21.080 "blocks": 149760, 00:22:21.080 "percent": 78 00:22:21.080 } 00:22:21.080 }, 00:22:21.080 "base_bdevs_list": [ 00:22:21.080 { 00:22:21.080 "name": "spare", 00:22:21.080 "uuid": "0d8763e0-f475-54b4-a96d-2696aaa95e57", 00:22:21.080 "is_configured": true, 00:22:21.080 "data_offset": 2048, 00:22:21.080 "data_size": 63488 00:22:21.080 }, 00:22:21.080 { 00:22:21.080 "name": "BaseBdev2", 00:22:21.080 "uuid": "b70e4cd6-e1ab-5130-b5a5-78380ff7d316", 00:22:21.080 "is_configured": true, 00:22:21.080 "data_offset": 2048, 00:22:21.080 "data_size": 63488 00:22:21.080 }, 00:22:21.080 { 00:22:21.080 "name": "BaseBdev3", 00:22:21.080 "uuid": "5626b63f-c68c-597a-b879-b4b545c3e4fc", 00:22:21.080 "is_configured": true, 00:22:21.080 "data_offset": 2048, 00:22:21.080 "data_size": 63488 00:22:21.080 }, 00:22:21.080 { 00:22:21.080 "name": "BaseBdev4", 00:22:21.080 "uuid": "0eec3c60-e269-5fd8-9179-a892f1ede12a", 00:22:21.080 "is_configured": true, 00:22:21.080 "data_offset": 2048, 00:22:21.080 "data_size": 63488 00:22:21.080 } 00:22:21.080 ] 00:22:21.080 }' 00:22:21.080 13:42:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:21.339 13:42:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:21.339 13:42:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:21.339 13:42:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:21.339 13:42:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:22.278 13:42:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:22.278 13:42:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:22.278 13:42:21 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:22.278 13:42:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:22.278 13:42:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:22.278 13:42:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:22.278 13:42:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:22.278 13:42:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:22.278 13:42:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.278 13:42:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:22.278 13:42:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.278 13:42:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:22.278 "name": "raid_bdev1", 00:22:22.278 "uuid": "8902efb1-b34f-4e7f-b5fe-f0c1fa2dc0b3", 00:22:22.278 "strip_size_kb": 64, 00:22:22.278 "state": "online", 00:22:22.278 "raid_level": "raid5f", 00:22:22.278 "superblock": true, 00:22:22.278 "num_base_bdevs": 4, 00:22:22.278 "num_base_bdevs_discovered": 4, 00:22:22.278 "num_base_bdevs_operational": 4, 00:22:22.278 "process": { 00:22:22.278 "type": "rebuild", 00:22:22.278 "target": "spare", 00:22:22.278 "progress": { 00:22:22.278 "blocks": 172800, 00:22:22.278 "percent": 90 00:22:22.278 } 00:22:22.278 }, 00:22:22.278 "base_bdevs_list": [ 00:22:22.278 { 00:22:22.278 "name": "spare", 00:22:22.278 "uuid": "0d8763e0-f475-54b4-a96d-2696aaa95e57", 00:22:22.278 "is_configured": true, 00:22:22.278 "data_offset": 2048, 00:22:22.278 "data_size": 63488 00:22:22.278 }, 00:22:22.278 { 00:22:22.278 "name": "BaseBdev2", 00:22:22.278 "uuid": "b70e4cd6-e1ab-5130-b5a5-78380ff7d316", 00:22:22.278 
"is_configured": true, 00:22:22.278 "data_offset": 2048, 00:22:22.278 "data_size": 63488 00:22:22.278 }, 00:22:22.278 { 00:22:22.278 "name": "BaseBdev3", 00:22:22.278 "uuid": "5626b63f-c68c-597a-b879-b4b545c3e4fc", 00:22:22.278 "is_configured": true, 00:22:22.278 "data_offset": 2048, 00:22:22.278 "data_size": 63488 00:22:22.278 }, 00:22:22.278 { 00:22:22.278 "name": "BaseBdev4", 00:22:22.278 "uuid": "0eec3c60-e269-5fd8-9179-a892f1ede12a", 00:22:22.278 "is_configured": true, 00:22:22.278 "data_offset": 2048, 00:22:22.278 "data_size": 63488 00:22:22.278 } 00:22:22.278 ] 00:22:22.278 }' 00:22:22.278 13:42:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:22.537 13:42:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:22.537 13:42:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:22.537 13:42:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:22.537 13:42:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:23.474 [2024-11-20 13:42:22.644921] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:23.474 [2024-11-20 13:42:22.645260] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:23.474 [2024-11-20 13:42:22.645451] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:23.474 13:42:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:23.474 13:42:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:23.474 13:42:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:23.474 13:42:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:22:23.474 13:42:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:23.474 13:42:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:23.474 13:42:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:23.474 13:42:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.474 13:42:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.474 13:42:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.474 13:42:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.474 13:42:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:23.474 "name": "raid_bdev1", 00:22:23.474 "uuid": "8902efb1-b34f-4e7f-b5fe-f0c1fa2dc0b3", 00:22:23.474 "strip_size_kb": 64, 00:22:23.474 "state": "online", 00:22:23.474 "raid_level": "raid5f", 00:22:23.474 "superblock": true, 00:22:23.474 "num_base_bdevs": 4, 00:22:23.474 "num_base_bdevs_discovered": 4, 00:22:23.474 "num_base_bdevs_operational": 4, 00:22:23.474 "base_bdevs_list": [ 00:22:23.474 { 00:22:23.474 "name": "spare", 00:22:23.474 "uuid": "0d8763e0-f475-54b4-a96d-2696aaa95e57", 00:22:23.474 "is_configured": true, 00:22:23.474 "data_offset": 2048, 00:22:23.474 "data_size": 63488 00:22:23.474 }, 00:22:23.474 { 00:22:23.474 "name": "BaseBdev2", 00:22:23.474 "uuid": "b70e4cd6-e1ab-5130-b5a5-78380ff7d316", 00:22:23.474 "is_configured": true, 00:22:23.474 "data_offset": 2048, 00:22:23.474 "data_size": 63488 00:22:23.474 }, 00:22:23.474 { 00:22:23.474 "name": "BaseBdev3", 00:22:23.474 "uuid": "5626b63f-c68c-597a-b879-b4b545c3e4fc", 00:22:23.474 "is_configured": true, 00:22:23.474 "data_offset": 2048, 00:22:23.474 "data_size": 63488 00:22:23.474 }, 00:22:23.474 { 00:22:23.474 "name": 
"BaseBdev4", 00:22:23.474 "uuid": "0eec3c60-e269-5fd8-9179-a892f1ede12a", 00:22:23.474 "is_configured": true, 00:22:23.474 "data_offset": 2048, 00:22:23.474 "data_size": 63488 00:22:23.474 } 00:22:23.474 ] 00:22:23.474 }' 00:22:23.474 13:42:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:23.474 13:42:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:23.474 13:42:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:23.474 13:42:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:22:23.474 13:42:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:22:23.474 13:42:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:23.474 13:42:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:23.474 13:42:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:23.474 13:42:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:23.474 13:42:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:23.474 13:42:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:23.474 13:42:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.474 13:42:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.474 13:42:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.733 13:42:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.733 13:42:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:22:23.733 "name": "raid_bdev1", 00:22:23.733 "uuid": "8902efb1-b34f-4e7f-b5fe-f0c1fa2dc0b3", 00:22:23.733 "strip_size_kb": 64, 00:22:23.733 "state": "online", 00:22:23.733 "raid_level": "raid5f", 00:22:23.733 "superblock": true, 00:22:23.733 "num_base_bdevs": 4, 00:22:23.733 "num_base_bdevs_discovered": 4, 00:22:23.733 "num_base_bdevs_operational": 4, 00:22:23.733 "base_bdevs_list": [ 00:22:23.733 { 00:22:23.733 "name": "spare", 00:22:23.733 "uuid": "0d8763e0-f475-54b4-a96d-2696aaa95e57", 00:22:23.733 "is_configured": true, 00:22:23.733 "data_offset": 2048, 00:22:23.733 "data_size": 63488 00:22:23.733 }, 00:22:23.733 { 00:22:23.733 "name": "BaseBdev2", 00:22:23.733 "uuid": "b70e4cd6-e1ab-5130-b5a5-78380ff7d316", 00:22:23.733 "is_configured": true, 00:22:23.733 "data_offset": 2048, 00:22:23.733 "data_size": 63488 00:22:23.733 }, 00:22:23.733 { 00:22:23.733 "name": "BaseBdev3", 00:22:23.733 "uuid": "5626b63f-c68c-597a-b879-b4b545c3e4fc", 00:22:23.733 "is_configured": true, 00:22:23.733 "data_offset": 2048, 00:22:23.733 "data_size": 63488 00:22:23.733 }, 00:22:23.733 { 00:22:23.734 "name": "BaseBdev4", 00:22:23.734 "uuid": "0eec3c60-e269-5fd8-9179-a892f1ede12a", 00:22:23.734 "is_configured": true, 00:22:23.734 "data_offset": 2048, 00:22:23.734 "data_size": 63488 00:22:23.734 } 00:22:23.734 ] 00:22:23.734 }' 00:22:23.734 13:42:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:23.734 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:23.734 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:23.734 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:23.734 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:22:23.734 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:23.734 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:23.734 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:23.734 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:23.734 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:23.734 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:23.734 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:23.734 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:23.734 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:23.734 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:23.734 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.734 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.734 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.734 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.734 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:23.734 "name": "raid_bdev1", 00:22:23.734 "uuid": "8902efb1-b34f-4e7f-b5fe-f0c1fa2dc0b3", 00:22:23.734 "strip_size_kb": 64, 00:22:23.734 "state": "online", 00:22:23.734 "raid_level": "raid5f", 00:22:23.734 "superblock": true, 00:22:23.734 "num_base_bdevs": 4, 00:22:23.734 "num_base_bdevs_discovered": 4, 00:22:23.734 "num_base_bdevs_operational": 4, 00:22:23.734 "base_bdevs_list": [ 00:22:23.734 { 
00:22:23.734 "name": "spare", 00:22:23.734 "uuid": "0d8763e0-f475-54b4-a96d-2696aaa95e57", 00:22:23.734 "is_configured": true, 00:22:23.734 "data_offset": 2048, 00:22:23.734 "data_size": 63488 00:22:23.734 }, 00:22:23.734 { 00:22:23.734 "name": "BaseBdev2", 00:22:23.734 "uuid": "b70e4cd6-e1ab-5130-b5a5-78380ff7d316", 00:22:23.734 "is_configured": true, 00:22:23.734 "data_offset": 2048, 00:22:23.734 "data_size": 63488 00:22:23.734 }, 00:22:23.734 { 00:22:23.734 "name": "BaseBdev3", 00:22:23.734 "uuid": "5626b63f-c68c-597a-b879-b4b545c3e4fc", 00:22:23.734 "is_configured": true, 00:22:23.734 "data_offset": 2048, 00:22:23.734 "data_size": 63488 00:22:23.734 }, 00:22:23.734 { 00:22:23.734 "name": "BaseBdev4", 00:22:23.734 "uuid": "0eec3c60-e269-5fd8-9179-a892f1ede12a", 00:22:23.734 "is_configured": true, 00:22:23.734 "data_offset": 2048, 00:22:23.734 "data_size": 63488 00:22:23.734 } 00:22:23.734 ] 00:22:23.734 }' 00:22:23.734 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:23.734 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:24.302 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:24.302 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.302 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:24.302 [2024-11-20 13:42:23.523416] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:24.302 [2024-11-20 13:42:23.523609] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:24.302 [2024-11-20 13:42:23.523737] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:24.302 [2024-11-20 13:42:23.523847] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:24.302 [2024-11-20 
13:42:23.523876] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:24.302 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.302 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:24.302 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:22:24.302 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.302 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:24.302 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.302 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:22:24.302 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:22:24.302 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:22:24.302 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:24.302 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:24.302 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:24.302 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:24.302 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:24.302 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:24.302 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:22:24.302 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:24.302 13:42:23 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:24.302 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:24.560 /dev/nbd0 00:22:24.560 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:24.560 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:24.560 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:24.560 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:22:24.560 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:24.560 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:24.560 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:24.560 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:22:24.560 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:24.560 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:24.560 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:24.560 1+0 records in 00:22:24.560 1+0 records out 00:22:24.560 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023695 s, 17.3 MB/s 00:22:24.560 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:24.560 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:22:24.560 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:24.560 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:24.560 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:22:24.560 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:24.560 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:24.560 13:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:22:24.817 /dev/nbd1 00:22:24.817 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:24.817 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:24.817 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:22:24.817 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:22:24.817 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:24.817 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:24.818 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:22:24.818 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:22:24.818 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:24.818 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:24.818 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:24.818 1+0 records in 00:22:24.818 
1+0 records out 00:22:24.818 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000685106 s, 6.0 MB/s 00:22:24.818 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:24.818 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:22:24.818 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:24.818 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:24.818 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:22:24.818 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:24.818 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:24.818 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:25.076 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:22:25.076 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:25.076 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:25.076 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:25.076 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:22:25.076 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:25.076 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:25.334 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:25.334 
13:42:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:25.334 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:25.334 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:25.334 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:25.334 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:25.334 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:22:25.334 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:22:25.334 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:25.334 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:22:25.594 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:25.594 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:25.594 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:25.594 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:25.594 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:25.594 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:25.594 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:22:25.594 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:22:25.594 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:22:25.594 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd 
bdev_passthru_delete spare 00:22:25.594 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.594 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:25.594 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.594 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:25.594 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.594 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:25.594 [2024-11-20 13:42:24.888976] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:25.594 [2024-11-20 13:42:24.889082] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:25.594 [2024-11-20 13:42:24.889117] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:22:25.594 [2024-11-20 13:42:24.889153] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:25.594 [2024-11-20 13:42:24.892128] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:25.594 [2024-11-20 13:42:24.892172] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:25.594 [2024-11-20 13:42:24.892278] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:25.594 [2024-11-20 13:42:24.892349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:25.594 [2024-11-20 13:42:24.892512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:25.594 [2024-11-20 13:42:24.892611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:25.594 [2024-11-20 13:42:24.892693] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:25.594 spare 00:22:25.594 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.594 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:22:25.594 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.594 13:42:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:25.594 [2024-11-20 13:42:24.992640] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:22:25.594 [2024-11-20 13:42:24.992949] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:25.594 [2024-11-20 13:42:24.993386] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:22:25.594 [2024-11-20 13:42:25.001484] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:22:25.594 [2024-11-20 13:42:25.001671] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:22:25.594 [2024-11-20 13:42:25.002082] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:25.594 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.594 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:22:25.594 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:25.594 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:25.594 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:25.594 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:22:25.594 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:25.594 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:25.594 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:25.594 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:25.594 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:25.594 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:25.594 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.594 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.594 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:25.594 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.594 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:25.594 "name": "raid_bdev1", 00:22:25.594 "uuid": "8902efb1-b34f-4e7f-b5fe-f0c1fa2dc0b3", 00:22:25.594 "strip_size_kb": 64, 00:22:25.594 "state": "online", 00:22:25.594 "raid_level": "raid5f", 00:22:25.594 "superblock": true, 00:22:25.594 "num_base_bdevs": 4, 00:22:25.594 "num_base_bdevs_discovered": 4, 00:22:25.594 "num_base_bdevs_operational": 4, 00:22:25.594 "base_bdevs_list": [ 00:22:25.594 { 00:22:25.594 "name": "spare", 00:22:25.594 "uuid": "0d8763e0-f475-54b4-a96d-2696aaa95e57", 00:22:25.594 "is_configured": true, 00:22:25.594 "data_offset": 2048, 00:22:25.594 "data_size": 63488 00:22:25.594 }, 00:22:25.594 { 00:22:25.594 "name": "BaseBdev2", 00:22:25.594 "uuid": "b70e4cd6-e1ab-5130-b5a5-78380ff7d316", 00:22:25.594 "is_configured": true, 00:22:25.594 "data_offset": 
2048, 00:22:25.594 "data_size": 63488 00:22:25.594 }, 00:22:25.594 { 00:22:25.594 "name": "BaseBdev3", 00:22:25.594 "uuid": "5626b63f-c68c-597a-b879-b4b545c3e4fc", 00:22:25.594 "is_configured": true, 00:22:25.594 "data_offset": 2048, 00:22:25.594 "data_size": 63488 00:22:25.594 }, 00:22:25.594 { 00:22:25.594 "name": "BaseBdev4", 00:22:25.594 "uuid": "0eec3c60-e269-5fd8-9179-a892f1ede12a", 00:22:25.594 "is_configured": true, 00:22:25.594 "data_offset": 2048, 00:22:25.594 "data_size": 63488 00:22:25.594 } 00:22:25.594 ] 00:22:25.594 }' 00:22:25.594 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:25.594 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.164 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:26.164 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:26.164 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:26.164 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:26.164 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:26.164 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:26.164 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.164 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.164 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.164 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.164 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:26.164 "name": 
"raid_bdev1", 00:22:26.164 "uuid": "8902efb1-b34f-4e7f-b5fe-f0c1fa2dc0b3", 00:22:26.164 "strip_size_kb": 64, 00:22:26.164 "state": "online", 00:22:26.164 "raid_level": "raid5f", 00:22:26.164 "superblock": true, 00:22:26.164 "num_base_bdevs": 4, 00:22:26.164 "num_base_bdevs_discovered": 4, 00:22:26.164 "num_base_bdevs_operational": 4, 00:22:26.164 "base_bdevs_list": [ 00:22:26.164 { 00:22:26.164 "name": "spare", 00:22:26.164 "uuid": "0d8763e0-f475-54b4-a96d-2696aaa95e57", 00:22:26.164 "is_configured": true, 00:22:26.164 "data_offset": 2048, 00:22:26.164 "data_size": 63488 00:22:26.164 }, 00:22:26.164 { 00:22:26.164 "name": "BaseBdev2", 00:22:26.164 "uuid": "b70e4cd6-e1ab-5130-b5a5-78380ff7d316", 00:22:26.164 "is_configured": true, 00:22:26.164 "data_offset": 2048, 00:22:26.164 "data_size": 63488 00:22:26.164 }, 00:22:26.164 { 00:22:26.164 "name": "BaseBdev3", 00:22:26.164 "uuid": "5626b63f-c68c-597a-b879-b4b545c3e4fc", 00:22:26.164 "is_configured": true, 00:22:26.164 "data_offset": 2048, 00:22:26.164 "data_size": 63488 00:22:26.164 }, 00:22:26.164 { 00:22:26.164 "name": "BaseBdev4", 00:22:26.164 "uuid": "0eec3c60-e269-5fd8-9179-a892f1ede12a", 00:22:26.164 "is_configured": true, 00:22:26.164 "data_offset": 2048, 00:22:26.164 "data_size": 63488 00:22:26.164 } 00:22:26.164 ] 00:22:26.164 }' 00:22:26.164 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:26.164 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:26.164 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:26.164 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:26.164 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:26.164 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.164 
13:42:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.164 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:26.164 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.164 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:22:26.164 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:26.164 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.164 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.164 [2024-11-20 13:42:25.598555] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:26.164 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.164 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:26.164 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:26.164 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:26.164 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:26.164 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:26.164 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:26.164 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:26.164 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:26.164 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:22:26.164 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:26.164 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:26.164 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.164 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.164 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.164 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.423 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:26.423 "name": "raid_bdev1", 00:22:26.423 "uuid": "8902efb1-b34f-4e7f-b5fe-f0c1fa2dc0b3", 00:22:26.423 "strip_size_kb": 64, 00:22:26.423 "state": "online", 00:22:26.423 "raid_level": "raid5f", 00:22:26.423 "superblock": true, 00:22:26.423 "num_base_bdevs": 4, 00:22:26.423 "num_base_bdevs_discovered": 3, 00:22:26.423 "num_base_bdevs_operational": 3, 00:22:26.423 "base_bdevs_list": [ 00:22:26.423 { 00:22:26.423 "name": null, 00:22:26.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.423 "is_configured": false, 00:22:26.423 "data_offset": 0, 00:22:26.423 "data_size": 63488 00:22:26.423 }, 00:22:26.423 { 00:22:26.423 "name": "BaseBdev2", 00:22:26.424 "uuid": "b70e4cd6-e1ab-5130-b5a5-78380ff7d316", 00:22:26.424 "is_configured": true, 00:22:26.424 "data_offset": 2048, 00:22:26.424 "data_size": 63488 00:22:26.424 }, 00:22:26.424 { 00:22:26.424 "name": "BaseBdev3", 00:22:26.424 "uuid": "5626b63f-c68c-597a-b879-b4b545c3e4fc", 00:22:26.424 "is_configured": true, 00:22:26.424 "data_offset": 2048, 00:22:26.424 "data_size": 63488 00:22:26.424 }, 00:22:26.424 { 00:22:26.424 "name": "BaseBdev4", 00:22:26.424 "uuid": "0eec3c60-e269-5fd8-9179-a892f1ede12a", 00:22:26.424 "is_configured": true, 00:22:26.424 "data_offset": 
2048, 00:22:26.424 "data_size": 63488 00:22:26.424 } 00:22:26.424 ] 00:22:26.424 }' 00:22:26.424 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:26.424 13:42:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.683 13:42:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:26.683 13:42:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.683 13:42:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.683 [2024-11-20 13:42:26.018443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:26.683 [2024-11-20 13:42:26.018636] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:26.683 [2024-11-20 13:42:26.018656] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:26.683 [2024-11-20 13:42:26.018714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:26.683 [2024-11-20 13:42:26.034010] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:22:26.683 13:42:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.683 13:42:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:22:26.683 [2024-11-20 13:42:26.044088] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:27.619 13:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:27.619 13:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:27.619 13:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:27.619 13:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:27.619 13:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:27.619 13:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:27.619 13:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:27.619 13:42:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.619 13:42:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:27.619 13:42:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.619 13:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:27.619 "name": "raid_bdev1", 00:22:27.619 "uuid": "8902efb1-b34f-4e7f-b5fe-f0c1fa2dc0b3", 00:22:27.619 "strip_size_kb": 64, 00:22:27.619 "state": "online", 00:22:27.619 
"raid_level": "raid5f", 00:22:27.619 "superblock": true, 00:22:27.619 "num_base_bdevs": 4, 00:22:27.619 "num_base_bdevs_discovered": 4, 00:22:27.619 "num_base_bdevs_operational": 4, 00:22:27.619 "process": { 00:22:27.619 "type": "rebuild", 00:22:27.619 "target": "spare", 00:22:27.619 "progress": { 00:22:27.619 "blocks": 19200, 00:22:27.619 "percent": 10 00:22:27.619 } 00:22:27.619 }, 00:22:27.619 "base_bdevs_list": [ 00:22:27.619 { 00:22:27.619 "name": "spare", 00:22:27.619 "uuid": "0d8763e0-f475-54b4-a96d-2696aaa95e57", 00:22:27.619 "is_configured": true, 00:22:27.619 "data_offset": 2048, 00:22:27.619 "data_size": 63488 00:22:27.619 }, 00:22:27.619 { 00:22:27.619 "name": "BaseBdev2", 00:22:27.619 "uuid": "b70e4cd6-e1ab-5130-b5a5-78380ff7d316", 00:22:27.619 "is_configured": true, 00:22:27.619 "data_offset": 2048, 00:22:27.619 "data_size": 63488 00:22:27.619 }, 00:22:27.619 { 00:22:27.619 "name": "BaseBdev3", 00:22:27.619 "uuid": "5626b63f-c68c-597a-b879-b4b545c3e4fc", 00:22:27.619 "is_configured": true, 00:22:27.619 "data_offset": 2048, 00:22:27.619 "data_size": 63488 00:22:27.619 }, 00:22:27.619 { 00:22:27.619 "name": "BaseBdev4", 00:22:27.619 "uuid": "0eec3c60-e269-5fd8-9179-a892f1ede12a", 00:22:27.619 "is_configured": true, 00:22:27.619 "data_offset": 2048, 00:22:27.619 "data_size": 63488 00:22:27.619 } 00:22:27.619 ] 00:22:27.619 }' 00:22:27.619 13:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:27.878 13:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:27.878 13:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:27.878 13:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:27.878 13:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:22:27.878 13:42:27 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.878 13:42:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:27.878 [2024-11-20 13:42:27.180004] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:27.878 [2024-11-20 13:42:27.252599] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:27.878 [2024-11-20 13:42:27.252927] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:27.878 [2024-11-20 13:42:27.252953] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:27.879 [2024-11-20 13:42:27.252967] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:27.879 13:42:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.879 13:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:27.879 13:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:27.879 13:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:27.879 13:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:27.879 13:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:27.879 13:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:27.879 13:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:27.879 13:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:27.879 13:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:27.879 13:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:22:27.879 13:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:27.879 13:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:27.879 13:42:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.879 13:42:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:27.879 13:42:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.879 13:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:27.879 "name": "raid_bdev1", 00:22:27.879 "uuid": "8902efb1-b34f-4e7f-b5fe-f0c1fa2dc0b3", 00:22:27.879 "strip_size_kb": 64, 00:22:27.879 "state": "online", 00:22:27.879 "raid_level": "raid5f", 00:22:27.879 "superblock": true, 00:22:27.879 "num_base_bdevs": 4, 00:22:27.879 "num_base_bdevs_discovered": 3, 00:22:27.879 "num_base_bdevs_operational": 3, 00:22:27.879 "base_bdevs_list": [ 00:22:27.879 { 00:22:27.879 "name": null, 00:22:27.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.879 "is_configured": false, 00:22:27.879 "data_offset": 0, 00:22:27.879 "data_size": 63488 00:22:27.879 }, 00:22:27.879 { 00:22:27.879 "name": "BaseBdev2", 00:22:27.879 "uuid": "b70e4cd6-e1ab-5130-b5a5-78380ff7d316", 00:22:27.879 "is_configured": true, 00:22:27.879 "data_offset": 2048, 00:22:27.879 "data_size": 63488 00:22:27.879 }, 00:22:27.879 { 00:22:27.879 "name": "BaseBdev3", 00:22:27.879 "uuid": "5626b63f-c68c-597a-b879-b4b545c3e4fc", 00:22:27.879 "is_configured": true, 00:22:27.879 "data_offset": 2048, 00:22:27.879 "data_size": 63488 00:22:27.879 }, 00:22:27.879 { 00:22:27.879 "name": "BaseBdev4", 00:22:27.879 "uuid": "0eec3c60-e269-5fd8-9179-a892f1ede12a", 00:22:27.879 "is_configured": true, 00:22:27.879 "data_offset": 2048, 00:22:27.879 "data_size": 63488 00:22:27.879 } 00:22:27.879 ] 00:22:27.879 
}' 00:22:27.879 13:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:27.879 13:42:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:28.449 13:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:28.449 13:42:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.449 13:42:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:28.449 [2024-11-20 13:42:27.712347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:28.449 [2024-11-20 13:42:27.712548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:28.449 [2024-11-20 13:42:27.712613] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:22:28.449 [2024-11-20 13:42:27.712715] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:28.449 [2024-11-20 13:42:27.713268] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:28.449 [2024-11-20 13:42:27.713418] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:28.449 [2024-11-20 13:42:27.713612] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:28.449 [2024-11-20 13:42:27.713714] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:28.449 [2024-11-20 13:42:27.713814] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:28.449 [2024-11-20 13:42:27.713884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:28.449 [2024-11-20 13:42:27.729109] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:22:28.449 spare 00:22:28.449 13:42:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.449 13:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:22:28.449 [2024-11-20 13:42:27.738326] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:29.385 13:42:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:29.385 13:42:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:29.385 13:42:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:29.385 13:42:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:29.385 13:42:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:29.385 13:42:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:29.385 13:42:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:29.385 13:42:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.385 13:42:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:29.385 13:42:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.385 13:42:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:29.385 "name": "raid_bdev1", 00:22:29.385 "uuid": "8902efb1-b34f-4e7f-b5fe-f0c1fa2dc0b3", 00:22:29.385 "strip_size_kb": 64, 00:22:29.385 "state": 
"online", 00:22:29.385 "raid_level": "raid5f", 00:22:29.385 "superblock": true, 00:22:29.385 "num_base_bdevs": 4, 00:22:29.385 "num_base_bdevs_discovered": 4, 00:22:29.385 "num_base_bdevs_operational": 4, 00:22:29.385 "process": { 00:22:29.385 "type": "rebuild", 00:22:29.385 "target": "spare", 00:22:29.385 "progress": { 00:22:29.385 "blocks": 17280, 00:22:29.385 "percent": 9 00:22:29.385 } 00:22:29.385 }, 00:22:29.385 "base_bdevs_list": [ 00:22:29.385 { 00:22:29.385 "name": "spare", 00:22:29.385 "uuid": "0d8763e0-f475-54b4-a96d-2696aaa95e57", 00:22:29.385 "is_configured": true, 00:22:29.385 "data_offset": 2048, 00:22:29.385 "data_size": 63488 00:22:29.385 }, 00:22:29.385 { 00:22:29.385 "name": "BaseBdev2", 00:22:29.385 "uuid": "b70e4cd6-e1ab-5130-b5a5-78380ff7d316", 00:22:29.385 "is_configured": true, 00:22:29.385 "data_offset": 2048, 00:22:29.385 "data_size": 63488 00:22:29.385 }, 00:22:29.385 { 00:22:29.385 "name": "BaseBdev3", 00:22:29.385 "uuid": "5626b63f-c68c-597a-b879-b4b545c3e4fc", 00:22:29.385 "is_configured": true, 00:22:29.385 "data_offset": 2048, 00:22:29.385 "data_size": 63488 00:22:29.385 }, 00:22:29.385 { 00:22:29.385 "name": "BaseBdev4", 00:22:29.385 "uuid": "0eec3c60-e269-5fd8-9179-a892f1ede12a", 00:22:29.385 "is_configured": true, 00:22:29.385 "data_offset": 2048, 00:22:29.385 "data_size": 63488 00:22:29.385 } 00:22:29.385 ] 00:22:29.385 }' 00:22:29.385 13:42:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:29.385 13:42:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:29.385 13:42:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:29.385 13:42:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:29.385 13:42:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:22:29.385 13:42:28 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.385 13:42:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:29.645 [2024-11-20 13:42:28.871451] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:29.645 [2024-11-20 13:42:28.950647] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:29.645 [2024-11-20 13:42:28.950882] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:29.645 [2024-11-20 13:42:28.950917] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:29.645 [2024-11-20 13:42:28.950932] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:29.645 13:42:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.645 13:42:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:29.645 13:42:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:29.645 13:42:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:29.645 13:42:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:29.645 13:42:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:29.645 13:42:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:29.645 13:42:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:29.645 13:42:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:29.645 13:42:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:29.645 13:42:28 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:29.645 13:42:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:29.645 13:42:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:29.645 13:42:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.645 13:42:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:29.645 13:42:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.645 13:42:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:29.645 "name": "raid_bdev1", 00:22:29.645 "uuid": "8902efb1-b34f-4e7f-b5fe-f0c1fa2dc0b3", 00:22:29.645 "strip_size_kb": 64, 00:22:29.645 "state": "online", 00:22:29.645 "raid_level": "raid5f", 00:22:29.645 "superblock": true, 00:22:29.645 "num_base_bdevs": 4, 00:22:29.645 "num_base_bdevs_discovered": 3, 00:22:29.645 "num_base_bdevs_operational": 3, 00:22:29.645 "base_bdevs_list": [ 00:22:29.645 { 00:22:29.645 "name": null, 00:22:29.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.645 "is_configured": false, 00:22:29.645 "data_offset": 0, 00:22:29.645 "data_size": 63488 00:22:29.645 }, 00:22:29.645 { 00:22:29.645 "name": "BaseBdev2", 00:22:29.645 "uuid": "b70e4cd6-e1ab-5130-b5a5-78380ff7d316", 00:22:29.645 "is_configured": true, 00:22:29.645 "data_offset": 2048, 00:22:29.645 "data_size": 63488 00:22:29.645 }, 00:22:29.645 { 00:22:29.645 "name": "BaseBdev3", 00:22:29.645 "uuid": "5626b63f-c68c-597a-b879-b4b545c3e4fc", 00:22:29.645 "is_configured": true, 00:22:29.645 "data_offset": 2048, 00:22:29.645 "data_size": 63488 00:22:29.645 }, 00:22:29.645 { 00:22:29.645 "name": "BaseBdev4", 00:22:29.645 "uuid": "0eec3c60-e269-5fd8-9179-a892f1ede12a", 00:22:29.645 "is_configured": true, 00:22:29.645 "data_offset": 2048, 00:22:29.645 
"data_size": 63488 00:22:29.645 } 00:22:29.645 ] 00:22:29.645 }' 00:22:29.645 13:42:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:29.645 13:42:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:29.904 13:42:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:29.904 13:42:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:29.904 13:42:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:29.904 13:42:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:29.904 13:42:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:29.904 13:42:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:29.904 13:42:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:29.904 13:42:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.904 13:42:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:30.162 13:42:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.162 13:42:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:30.162 "name": "raid_bdev1", 00:22:30.162 "uuid": "8902efb1-b34f-4e7f-b5fe-f0c1fa2dc0b3", 00:22:30.162 "strip_size_kb": 64, 00:22:30.162 "state": "online", 00:22:30.162 "raid_level": "raid5f", 00:22:30.162 "superblock": true, 00:22:30.162 "num_base_bdevs": 4, 00:22:30.162 "num_base_bdevs_discovered": 3, 00:22:30.162 "num_base_bdevs_operational": 3, 00:22:30.162 "base_bdevs_list": [ 00:22:30.162 { 00:22:30.162 "name": null, 00:22:30.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:30.162 
"is_configured": false, 00:22:30.162 "data_offset": 0, 00:22:30.162 "data_size": 63488 00:22:30.162 }, 00:22:30.162 { 00:22:30.162 "name": "BaseBdev2", 00:22:30.162 "uuid": "b70e4cd6-e1ab-5130-b5a5-78380ff7d316", 00:22:30.162 "is_configured": true, 00:22:30.162 "data_offset": 2048, 00:22:30.162 "data_size": 63488 00:22:30.162 }, 00:22:30.162 { 00:22:30.162 "name": "BaseBdev3", 00:22:30.162 "uuid": "5626b63f-c68c-597a-b879-b4b545c3e4fc", 00:22:30.162 "is_configured": true, 00:22:30.162 "data_offset": 2048, 00:22:30.162 "data_size": 63488 00:22:30.162 }, 00:22:30.162 { 00:22:30.162 "name": "BaseBdev4", 00:22:30.162 "uuid": "0eec3c60-e269-5fd8-9179-a892f1ede12a", 00:22:30.162 "is_configured": true, 00:22:30.162 "data_offset": 2048, 00:22:30.162 "data_size": 63488 00:22:30.162 } 00:22:30.162 ] 00:22:30.162 }' 00:22:30.162 13:42:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:30.162 13:42:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:30.162 13:42:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:30.162 13:42:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:30.162 13:42:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:22:30.162 13:42:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.162 13:42:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:30.162 13:42:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.162 13:42:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:30.162 13:42:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.162 13:42:29 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:30.162 [2024-11-20 13:42:29.518370] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:30.162 [2024-11-20 13:42:29.518553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:30.162 [2024-11-20 13:42:29.518591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:22:30.162 [2024-11-20 13:42:29.518604] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:30.162 [2024-11-20 13:42:29.519105] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:30.162 [2024-11-20 13:42:29.519127] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:30.162 [2024-11-20 13:42:29.519215] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:30.162 [2024-11-20 13:42:29.519231] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:30.162 [2024-11-20 13:42:29.519247] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:30.162 [2024-11-20 13:42:29.519259] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:22:30.162 BaseBdev1 00:22:30.162 13:42:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.162 13:42:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:22:31.096 13:42:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:31.096 13:42:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:31.096 13:42:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:22:31.096 13:42:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:31.096 13:42:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:31.096 13:42:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:31.096 13:42:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:31.096 13:42:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:31.096 13:42:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:31.096 13:42:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:31.096 13:42:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:31.096 13:42:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.096 13:42:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:31.096 13:42:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.096 13:42:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.096 13:42:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:31.096 "name": "raid_bdev1", 00:22:31.096 "uuid": "8902efb1-b34f-4e7f-b5fe-f0c1fa2dc0b3", 00:22:31.096 "strip_size_kb": 64, 00:22:31.096 "state": "online", 00:22:31.096 "raid_level": "raid5f", 00:22:31.096 "superblock": true, 00:22:31.096 "num_base_bdevs": 4, 00:22:31.096 "num_base_bdevs_discovered": 3, 00:22:31.096 "num_base_bdevs_operational": 3, 00:22:31.096 "base_bdevs_list": [ 00:22:31.096 { 00:22:31.096 "name": null, 00:22:31.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:31.096 "is_configured": false, 00:22:31.096 
"data_offset": 0, 00:22:31.096 "data_size": 63488 00:22:31.096 }, 00:22:31.096 { 00:22:31.096 "name": "BaseBdev2", 00:22:31.096 "uuid": "b70e4cd6-e1ab-5130-b5a5-78380ff7d316", 00:22:31.096 "is_configured": true, 00:22:31.096 "data_offset": 2048, 00:22:31.096 "data_size": 63488 00:22:31.096 }, 00:22:31.096 { 00:22:31.096 "name": "BaseBdev3", 00:22:31.096 "uuid": "5626b63f-c68c-597a-b879-b4b545c3e4fc", 00:22:31.096 "is_configured": true, 00:22:31.096 "data_offset": 2048, 00:22:31.096 "data_size": 63488 00:22:31.096 }, 00:22:31.096 { 00:22:31.096 "name": "BaseBdev4", 00:22:31.096 "uuid": "0eec3c60-e269-5fd8-9179-a892f1ede12a", 00:22:31.096 "is_configured": true, 00:22:31.096 "data_offset": 2048, 00:22:31.096 "data_size": 63488 00:22:31.096 } 00:22:31.096 ] 00:22:31.096 }' 00:22:31.096 13:42:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:31.096 13:42:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:31.664 13:42:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:31.664 13:42:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:31.664 13:42:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:31.664 13:42:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:31.664 13:42:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:31.664 13:42:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:31.664 13:42:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.664 13:42:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:31.664 13:42:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:22:31.664 13:42:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.664 13:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:31.664 "name": "raid_bdev1", 00:22:31.664 "uuid": "8902efb1-b34f-4e7f-b5fe-f0c1fa2dc0b3", 00:22:31.664 "strip_size_kb": 64, 00:22:31.664 "state": "online", 00:22:31.664 "raid_level": "raid5f", 00:22:31.664 "superblock": true, 00:22:31.664 "num_base_bdevs": 4, 00:22:31.664 "num_base_bdevs_discovered": 3, 00:22:31.664 "num_base_bdevs_operational": 3, 00:22:31.664 "base_bdevs_list": [ 00:22:31.664 { 00:22:31.664 "name": null, 00:22:31.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:31.664 "is_configured": false, 00:22:31.664 "data_offset": 0, 00:22:31.664 "data_size": 63488 00:22:31.664 }, 00:22:31.664 { 00:22:31.664 "name": "BaseBdev2", 00:22:31.664 "uuid": "b70e4cd6-e1ab-5130-b5a5-78380ff7d316", 00:22:31.664 "is_configured": true, 00:22:31.664 "data_offset": 2048, 00:22:31.664 "data_size": 63488 00:22:31.664 }, 00:22:31.664 { 00:22:31.664 "name": "BaseBdev3", 00:22:31.664 "uuid": "5626b63f-c68c-597a-b879-b4b545c3e4fc", 00:22:31.664 "is_configured": true, 00:22:31.664 "data_offset": 2048, 00:22:31.664 "data_size": 63488 00:22:31.664 }, 00:22:31.664 { 00:22:31.664 "name": "BaseBdev4", 00:22:31.664 "uuid": "0eec3c60-e269-5fd8-9179-a892f1ede12a", 00:22:31.664 "is_configured": true, 00:22:31.664 "data_offset": 2048, 00:22:31.664 "data_size": 63488 00:22:31.664 } 00:22:31.664 ] 00:22:31.664 }' 00:22:31.664 13:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:31.664 13:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:31.664 13:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:31.664 13:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:31.664 
13:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:31.664 13:42:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:22:31.664 13:42:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:31.664 13:42:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:31.664 13:42:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:31.664 13:42:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:31.664 13:42:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:31.664 13:42:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:31.664 13:42:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.664 13:42:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:31.664 [2024-11-20 13:42:31.118478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:31.664 [2024-11-20 13:42:31.118649] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:31.664 [2024-11-20 13:42:31.118671] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:31.664 request: 00:22:31.664 { 00:22:31.664 "base_bdev": "BaseBdev1", 00:22:31.664 "raid_bdev": "raid_bdev1", 00:22:31.664 "method": "bdev_raid_add_base_bdev", 00:22:31.664 "req_id": 1 00:22:31.664 } 00:22:31.664 Got JSON-RPC error response 00:22:31.664 response: 00:22:31.664 { 00:22:31.664 "code": -22, 00:22:31.664 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:22:31.664 } 00:22:31.664 13:42:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:31.664 13:42:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:22:31.664 13:42:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:31.664 13:42:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:31.664 13:42:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:31.664 13:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:22:32.676 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:32.676 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:32.676 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:32.676 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:32.676 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:32.676 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:32.676 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:32.676 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:32.676 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:32.676 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:32.676 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:32.676 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.676 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:32.676 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:32.935 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.935 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:32.935 "name": "raid_bdev1", 00:22:32.935 "uuid": "8902efb1-b34f-4e7f-b5fe-f0c1fa2dc0b3", 00:22:32.935 "strip_size_kb": 64, 00:22:32.935 "state": "online", 00:22:32.935 "raid_level": "raid5f", 00:22:32.935 "superblock": true, 00:22:32.935 "num_base_bdevs": 4, 00:22:32.935 "num_base_bdevs_discovered": 3, 00:22:32.935 "num_base_bdevs_operational": 3, 00:22:32.935 "base_bdevs_list": [ 00:22:32.935 { 00:22:32.935 "name": null, 00:22:32.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:32.935 "is_configured": false, 00:22:32.935 "data_offset": 0, 00:22:32.935 "data_size": 63488 00:22:32.935 }, 00:22:32.935 { 00:22:32.935 "name": "BaseBdev2", 00:22:32.935 "uuid": "b70e4cd6-e1ab-5130-b5a5-78380ff7d316", 00:22:32.935 "is_configured": true, 00:22:32.935 "data_offset": 2048, 00:22:32.935 "data_size": 63488 00:22:32.935 }, 00:22:32.935 { 00:22:32.935 "name": "BaseBdev3", 00:22:32.935 "uuid": "5626b63f-c68c-597a-b879-b4b545c3e4fc", 00:22:32.935 "is_configured": true, 00:22:32.935 "data_offset": 2048, 00:22:32.935 "data_size": 63488 00:22:32.935 }, 00:22:32.935 { 00:22:32.935 "name": "BaseBdev4", 00:22:32.935 "uuid": "0eec3c60-e269-5fd8-9179-a892f1ede12a", 00:22:32.935 "is_configured": true, 00:22:32.935 "data_offset": 2048, 00:22:32.935 "data_size": 63488 00:22:32.935 } 00:22:32.935 ] 00:22:32.935 }' 00:22:32.935 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:32.935 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:22:33.194 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:33.194 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:33.194 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:33.194 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:33.194 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:33.194 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:33.194 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.194 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.194 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:33.194 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.194 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:33.194 "name": "raid_bdev1", 00:22:33.194 "uuid": "8902efb1-b34f-4e7f-b5fe-f0c1fa2dc0b3", 00:22:33.194 "strip_size_kb": 64, 00:22:33.194 "state": "online", 00:22:33.194 "raid_level": "raid5f", 00:22:33.194 "superblock": true, 00:22:33.194 "num_base_bdevs": 4, 00:22:33.194 "num_base_bdevs_discovered": 3, 00:22:33.194 "num_base_bdevs_operational": 3, 00:22:33.194 "base_bdevs_list": [ 00:22:33.194 { 00:22:33.194 "name": null, 00:22:33.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.194 "is_configured": false, 00:22:33.194 "data_offset": 0, 00:22:33.194 "data_size": 63488 00:22:33.194 }, 00:22:33.194 { 00:22:33.194 "name": "BaseBdev2", 00:22:33.194 "uuid": "b70e4cd6-e1ab-5130-b5a5-78380ff7d316", 00:22:33.194 "is_configured": true, 
00:22:33.194 "data_offset": 2048, 00:22:33.194 "data_size": 63488 00:22:33.194 }, 00:22:33.194 { 00:22:33.194 "name": "BaseBdev3", 00:22:33.194 "uuid": "5626b63f-c68c-597a-b879-b4b545c3e4fc", 00:22:33.194 "is_configured": true, 00:22:33.194 "data_offset": 2048, 00:22:33.194 "data_size": 63488 00:22:33.194 }, 00:22:33.194 { 00:22:33.194 "name": "BaseBdev4", 00:22:33.194 "uuid": "0eec3c60-e269-5fd8-9179-a892f1ede12a", 00:22:33.194 "is_configured": true, 00:22:33.194 "data_offset": 2048, 00:22:33.194 "data_size": 63488 00:22:33.194 } 00:22:33.194 ] 00:22:33.194 }' 00:22:33.194 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:33.194 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:33.194 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:33.453 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:33.453 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 84926 00:22:33.453 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 84926 ']' 00:22:33.453 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 84926 00:22:33.453 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:22:33.453 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:33.453 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84926 00:22:33.453 killing process with pid 84926 00:22:33.453 Received shutdown signal, test time was about 60.000000 seconds 00:22:33.453 00:22:33.453 Latency(us) 00:22:33.453 [2024-11-20T13:42:32.938Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:33.453 [2024-11-20T13:42:32.938Z] 
=================================================================================================================== 00:22:33.453 [2024-11-20T13:42:32.938Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:33.453 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:33.453 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:33.453 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84926' 00:22:33.453 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 84926 00:22:33.453 13:42:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 84926 00:22:33.453 [2024-11-20 13:42:32.747721] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:33.453 [2024-11-20 13:42:32.747866] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:33.453 [2024-11-20 13:42:32.747954] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:33.453 [2024-11-20 13:42:32.747971] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:22:34.020 [2024-11-20 13:42:33.253325] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:34.957 ************************************ 00:22:34.957 END TEST raid5f_rebuild_test_sb 00:22:34.957 ************************************ 00:22:34.957 13:42:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:22:34.957 00:22:34.957 real 0m27.410s 00:22:34.957 user 0m34.129s 00:22:34.957 sys 0m3.605s 00:22:34.957 13:42:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:34.957 13:42:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:34.957 13:42:34 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:22:34.957 13:42:34 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:22:34.957 13:42:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:34.957 13:42:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:34.957 13:42:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:35.215 ************************************ 00:22:35.215 START TEST raid_state_function_test_sb_4k 00:22:35.215 ************************************ 00:22:35.215 13:42:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:22:35.215 13:42:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:22:35.215 13:42:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:22:35.215 13:42:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:22:35.216 13:42:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:35.216 13:42:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:35.216 13:42:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:35.216 13:42:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:35.216 13:42:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:35.216 13:42:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:35.216 13:42:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:35.216 13:42:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:35.216 13:42:34 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:35.216 13:42:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:35.216 13:42:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:35.216 13:42:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:35.216 13:42:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:35.216 13:42:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:35.216 13:42:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:35.216 13:42:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:22:35.216 13:42:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:22:35.216 13:42:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:22:35.216 13:42:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:22:35.216 Process raid pid: 85742 00:22:35.216 13:42:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=85742 00:22:35.216 13:42:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85742' 00:22:35.216 13:42:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 85742 00:22:35.216 13:42:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 85742 ']' 00:22:35.216 13:42:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.216 13:42:34 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:22:35.216 13:42:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:35.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.216 13:42:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.216 13:42:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:35.216 13:42:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:35.216 [2024-11-20 13:42:34.560664] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:22:35.216 [2024-11-20 13:42:34.560799] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:35.517 [2024-11-20 13:42:34.746290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.517 [2024-11-20 13:42:34.858289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.776 [2024-11-20 13:42:35.074972] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:35.776 [2024-11-20 13:42:35.075020] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:36.036 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:36.036 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:22:36.036 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 
00:22:36.036 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.036 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:36.036 [2024-11-20 13:42:35.411602] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:36.036 [2024-11-20 13:42:35.411825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:36.036 [2024-11-20 13:42:35.411939] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:36.036 [2024-11-20 13:42:35.411984] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:36.036 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.036 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:36.036 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:36.036 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:36.036 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:36.036 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:36.036 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:36.036 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:36.036 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:36.036 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:36.036 13:42:35 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:36.036 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:36.036 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.036 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:36.036 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:36.036 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.036 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:36.036 "name": "Existed_Raid", 00:22:36.036 "uuid": "704524d7-ec24-448e-b166-66a9c34d8c69", 00:22:36.036 "strip_size_kb": 0, 00:22:36.036 "state": "configuring", 00:22:36.036 "raid_level": "raid1", 00:22:36.036 "superblock": true, 00:22:36.036 "num_base_bdevs": 2, 00:22:36.036 "num_base_bdevs_discovered": 0, 00:22:36.036 "num_base_bdevs_operational": 2, 00:22:36.036 "base_bdevs_list": [ 00:22:36.036 { 00:22:36.036 "name": "BaseBdev1", 00:22:36.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:36.036 "is_configured": false, 00:22:36.036 "data_offset": 0, 00:22:36.036 "data_size": 0 00:22:36.036 }, 00:22:36.036 { 00:22:36.036 "name": "BaseBdev2", 00:22:36.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:36.036 "is_configured": false, 00:22:36.036 "data_offset": 0, 00:22:36.036 "data_size": 0 00:22:36.036 } 00:22:36.036 ] 00:22:36.036 }' 00:22:36.036 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:36.036 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:36.604 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:22:36.604 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.604 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:36.604 [2024-11-20 13:42:35.854940] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:36.604 [2024-11-20 13:42:35.855156] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:22:36.604 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.604 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:36.604 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.604 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:36.604 [2024-11-20 13:42:35.866928] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:36.604 [2024-11-20 13:42:35.867122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:36.604 [2024-11-20 13:42:35.867216] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:36.604 [2024-11-20 13:42:35.867268] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:36.604 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.604 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:22:36.604 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.604 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:22:36.604 BaseBdev1 00:22:36.604 [2024-11-20 13:42:35.918348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:36.604 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.604 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:36.604 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:36.604 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:36.604 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:22:36.604 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:36.604 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:36.604 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:36.604 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.604 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:36.604 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.604 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:36.604 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.604 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:36.604 [ 00:22:36.604 { 00:22:36.604 "name": "BaseBdev1", 00:22:36.604 "aliases": [ 00:22:36.604 "4fac52c6-e3fb-4617-b0d4-009b8fcbcb85" 00:22:36.604 
], 00:22:36.604 "product_name": "Malloc disk", 00:22:36.604 "block_size": 4096, 00:22:36.604 "num_blocks": 8192, 00:22:36.604 "uuid": "4fac52c6-e3fb-4617-b0d4-009b8fcbcb85", 00:22:36.604 "assigned_rate_limits": { 00:22:36.604 "rw_ios_per_sec": 0, 00:22:36.604 "rw_mbytes_per_sec": 0, 00:22:36.604 "r_mbytes_per_sec": 0, 00:22:36.604 "w_mbytes_per_sec": 0 00:22:36.604 }, 00:22:36.604 "claimed": true, 00:22:36.604 "claim_type": "exclusive_write", 00:22:36.604 "zoned": false, 00:22:36.604 "supported_io_types": { 00:22:36.604 "read": true, 00:22:36.604 "write": true, 00:22:36.605 "unmap": true, 00:22:36.605 "flush": true, 00:22:36.605 "reset": true, 00:22:36.605 "nvme_admin": false, 00:22:36.605 "nvme_io": false, 00:22:36.605 "nvme_io_md": false, 00:22:36.605 "write_zeroes": true, 00:22:36.605 "zcopy": true, 00:22:36.605 "get_zone_info": false, 00:22:36.605 "zone_management": false, 00:22:36.605 "zone_append": false, 00:22:36.605 "compare": false, 00:22:36.605 "compare_and_write": false, 00:22:36.605 "abort": true, 00:22:36.605 "seek_hole": false, 00:22:36.605 "seek_data": false, 00:22:36.605 "copy": true, 00:22:36.605 "nvme_iov_md": false 00:22:36.605 }, 00:22:36.605 "memory_domains": [ 00:22:36.605 { 00:22:36.605 "dma_device_id": "system", 00:22:36.605 "dma_device_type": 1 00:22:36.605 }, 00:22:36.605 { 00:22:36.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:36.605 "dma_device_type": 2 00:22:36.605 } 00:22:36.605 ], 00:22:36.605 "driver_specific": {} 00:22:36.605 } 00:22:36.605 ] 00:22:36.605 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.605 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:22:36.605 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:36.605 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:22:36.605 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:36.605 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:36.605 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:36.605 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:36.605 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:36.605 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:36.605 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:36.605 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:36.605 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:36.605 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:36.605 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.605 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:36.605 13:42:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.605 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:36.605 "name": "Existed_Raid", 00:22:36.605 "uuid": "f3b4f93a-4cdb-4035-b189-a8960c6b7711", 00:22:36.605 "strip_size_kb": 0, 00:22:36.605 "state": "configuring", 00:22:36.605 "raid_level": "raid1", 00:22:36.605 "superblock": true, 00:22:36.605 "num_base_bdevs": 2, 00:22:36.605 "num_base_bdevs_discovered": 1, 
00:22:36.605 "num_base_bdevs_operational": 2, 00:22:36.605 "base_bdevs_list": [ 00:22:36.605 { 00:22:36.605 "name": "BaseBdev1", 00:22:36.605 "uuid": "4fac52c6-e3fb-4617-b0d4-009b8fcbcb85", 00:22:36.605 "is_configured": true, 00:22:36.605 "data_offset": 256, 00:22:36.605 "data_size": 7936 00:22:36.605 }, 00:22:36.605 { 00:22:36.605 "name": "BaseBdev2", 00:22:36.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:36.605 "is_configured": false, 00:22:36.605 "data_offset": 0, 00:22:36.605 "data_size": 0 00:22:36.605 } 00:22:36.605 ] 00:22:36.605 }' 00:22:36.605 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:36.605 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:37.174 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:37.174 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.174 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:37.174 [2024-11-20 13:42:36.425719] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:37.174 [2024-11-20 13:42:36.425947] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:37.174 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.174 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:37.174 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.174 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:37.174 [2024-11-20 13:42:36.437761] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:37.174 [2024-11-20 13:42:36.440106] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:37.174 [2024-11-20 13:42:36.440271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:37.174 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.174 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:22:37.174 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:37.174 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:37.174 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:37.174 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:37.174 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:37.174 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:37.174 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:37.174 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:37.174 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:37.174 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:37.174 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:37.174 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:22:37.174 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.174 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:37.174 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:37.174 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.174 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:37.174 "name": "Existed_Raid", 00:22:37.174 "uuid": "b664c2fc-2b74-4aef-96a8-76eb56d70a21", 00:22:37.174 "strip_size_kb": 0, 00:22:37.174 "state": "configuring", 00:22:37.174 "raid_level": "raid1", 00:22:37.174 "superblock": true, 00:22:37.174 "num_base_bdevs": 2, 00:22:37.174 "num_base_bdevs_discovered": 1, 00:22:37.174 "num_base_bdevs_operational": 2, 00:22:37.174 "base_bdevs_list": [ 00:22:37.174 { 00:22:37.174 "name": "BaseBdev1", 00:22:37.174 "uuid": "4fac52c6-e3fb-4617-b0d4-009b8fcbcb85", 00:22:37.174 "is_configured": true, 00:22:37.174 "data_offset": 256, 00:22:37.174 "data_size": 7936 00:22:37.174 }, 00:22:37.174 { 00:22:37.174 "name": "BaseBdev2", 00:22:37.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:37.174 "is_configured": false, 00:22:37.174 "data_offset": 0, 00:22:37.174 "data_size": 0 00:22:37.174 } 00:22:37.174 ] 00:22:37.174 }' 00:22:37.174 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:37.174 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:37.435 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:22:37.435 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.435 13:42:36 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:37.695 [2024-11-20 13:42:36.934900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:37.695 [2024-11-20 13:42:36.935192] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:37.695 [2024-11-20 13:42:36.935210] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:37.695 [2024-11-20 13:42:36.935483] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:37.695 BaseBdev2 00:22:37.695 [2024-11-20 13:42:36.935652] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:37.695 [2024-11-20 13:42:36.935673] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:37.695 [2024-11-20 13:42:36.935819] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:37.695 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.695 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:37.695 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:37.695 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:37.695 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:22:37.695 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:37.695 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:37.695 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:37.695 13:42:36 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.695 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:37.695 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.695 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:37.695 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.695 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:37.695 [ 00:22:37.695 { 00:22:37.695 "name": "BaseBdev2", 00:22:37.695 "aliases": [ 00:22:37.695 "6786b900-ee84-4cb3-97e6-84b9eca22d4a" 00:22:37.695 ], 00:22:37.695 "product_name": "Malloc disk", 00:22:37.695 "block_size": 4096, 00:22:37.695 "num_blocks": 8192, 00:22:37.695 "uuid": "6786b900-ee84-4cb3-97e6-84b9eca22d4a", 00:22:37.695 "assigned_rate_limits": { 00:22:37.695 "rw_ios_per_sec": 0, 00:22:37.695 "rw_mbytes_per_sec": 0, 00:22:37.695 "r_mbytes_per_sec": 0, 00:22:37.695 "w_mbytes_per_sec": 0 00:22:37.695 }, 00:22:37.695 "claimed": true, 00:22:37.695 "claim_type": "exclusive_write", 00:22:37.695 "zoned": false, 00:22:37.695 "supported_io_types": { 00:22:37.695 "read": true, 00:22:37.695 "write": true, 00:22:37.695 "unmap": true, 00:22:37.695 "flush": true, 00:22:37.695 "reset": true, 00:22:37.695 "nvme_admin": false, 00:22:37.695 "nvme_io": false, 00:22:37.695 "nvme_io_md": false, 00:22:37.695 "write_zeroes": true, 00:22:37.695 "zcopy": true, 00:22:37.695 "get_zone_info": false, 00:22:37.695 "zone_management": false, 00:22:37.695 "zone_append": false, 00:22:37.695 "compare": false, 00:22:37.695 "compare_and_write": false, 00:22:37.695 "abort": true, 00:22:37.695 "seek_hole": false, 00:22:37.695 "seek_data": false, 00:22:37.695 "copy": true, 00:22:37.695 "nvme_iov_md": false 
00:22:37.695 }, 00:22:37.695 "memory_domains": [ 00:22:37.695 { 00:22:37.695 "dma_device_id": "system", 00:22:37.695 "dma_device_type": 1 00:22:37.695 }, 00:22:37.695 { 00:22:37.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:37.695 "dma_device_type": 2 00:22:37.695 } 00:22:37.695 ], 00:22:37.695 "driver_specific": {} 00:22:37.695 } 00:22:37.695 ] 00:22:37.695 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.695 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:22:37.695 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:37.695 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:37.695 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:22:37.695 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:37.695 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:37.695 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:37.695 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:37.695 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:37.695 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:37.695 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:37.695 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:37.695 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:22:37.695 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:37.695 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:37.695 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.695 13:42:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:37.695 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.695 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:37.696 "name": "Existed_Raid", 00:22:37.696 "uuid": "b664c2fc-2b74-4aef-96a8-76eb56d70a21", 00:22:37.696 "strip_size_kb": 0, 00:22:37.696 "state": "online", 00:22:37.696 "raid_level": "raid1", 00:22:37.696 "superblock": true, 00:22:37.696 "num_base_bdevs": 2, 00:22:37.696 "num_base_bdevs_discovered": 2, 00:22:37.696 "num_base_bdevs_operational": 2, 00:22:37.696 "base_bdevs_list": [ 00:22:37.696 { 00:22:37.696 "name": "BaseBdev1", 00:22:37.696 "uuid": "4fac52c6-e3fb-4617-b0d4-009b8fcbcb85", 00:22:37.696 "is_configured": true, 00:22:37.696 "data_offset": 256, 00:22:37.696 "data_size": 7936 00:22:37.696 }, 00:22:37.696 { 00:22:37.696 "name": "BaseBdev2", 00:22:37.696 "uuid": "6786b900-ee84-4cb3-97e6-84b9eca22d4a", 00:22:37.696 "is_configured": true, 00:22:37.696 "data_offset": 256, 00:22:37.696 "data_size": 7936 00:22:37.696 } 00:22:37.696 ] 00:22:37.696 }' 00:22:37.696 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:37.696 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:37.954 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:37.954 13:42:37 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:37.955 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:37.955 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:37.955 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:22:37.955 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:37.955 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:37.955 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:37.955 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.955 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:37.955 [2024-11-20 13:42:37.402647] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:37.955 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.214 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:38.214 "name": "Existed_Raid", 00:22:38.214 "aliases": [ 00:22:38.214 "b664c2fc-2b74-4aef-96a8-76eb56d70a21" 00:22:38.214 ], 00:22:38.214 "product_name": "Raid Volume", 00:22:38.214 "block_size": 4096, 00:22:38.214 "num_blocks": 7936, 00:22:38.214 "uuid": "b664c2fc-2b74-4aef-96a8-76eb56d70a21", 00:22:38.214 "assigned_rate_limits": { 00:22:38.214 "rw_ios_per_sec": 0, 00:22:38.214 "rw_mbytes_per_sec": 0, 00:22:38.214 "r_mbytes_per_sec": 0, 00:22:38.214 "w_mbytes_per_sec": 0 00:22:38.214 }, 00:22:38.214 "claimed": false, 00:22:38.214 "zoned": false, 00:22:38.214 "supported_io_types": { 00:22:38.214 "read": true, 
00:22:38.214 "write": true, 00:22:38.214 "unmap": false, 00:22:38.214 "flush": false, 00:22:38.214 "reset": true, 00:22:38.214 "nvme_admin": false, 00:22:38.214 "nvme_io": false, 00:22:38.214 "nvme_io_md": false, 00:22:38.214 "write_zeroes": true, 00:22:38.214 "zcopy": false, 00:22:38.214 "get_zone_info": false, 00:22:38.214 "zone_management": false, 00:22:38.214 "zone_append": false, 00:22:38.214 "compare": false, 00:22:38.214 "compare_and_write": false, 00:22:38.214 "abort": false, 00:22:38.214 "seek_hole": false, 00:22:38.214 "seek_data": false, 00:22:38.214 "copy": false, 00:22:38.214 "nvme_iov_md": false 00:22:38.214 }, 00:22:38.214 "memory_domains": [ 00:22:38.214 { 00:22:38.214 "dma_device_id": "system", 00:22:38.214 "dma_device_type": 1 00:22:38.214 }, 00:22:38.214 { 00:22:38.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:38.214 "dma_device_type": 2 00:22:38.214 }, 00:22:38.214 { 00:22:38.214 "dma_device_id": "system", 00:22:38.214 "dma_device_type": 1 00:22:38.214 }, 00:22:38.214 { 00:22:38.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:38.214 "dma_device_type": 2 00:22:38.214 } 00:22:38.214 ], 00:22:38.214 "driver_specific": { 00:22:38.214 "raid": { 00:22:38.214 "uuid": "b664c2fc-2b74-4aef-96a8-76eb56d70a21", 00:22:38.214 "strip_size_kb": 0, 00:22:38.214 "state": "online", 00:22:38.214 "raid_level": "raid1", 00:22:38.214 "superblock": true, 00:22:38.214 "num_base_bdevs": 2, 00:22:38.214 "num_base_bdevs_discovered": 2, 00:22:38.214 "num_base_bdevs_operational": 2, 00:22:38.214 "base_bdevs_list": [ 00:22:38.214 { 00:22:38.214 "name": "BaseBdev1", 00:22:38.214 "uuid": "4fac52c6-e3fb-4617-b0d4-009b8fcbcb85", 00:22:38.214 "is_configured": true, 00:22:38.214 "data_offset": 256, 00:22:38.214 "data_size": 7936 00:22:38.214 }, 00:22:38.214 { 00:22:38.214 "name": "BaseBdev2", 00:22:38.214 "uuid": "6786b900-ee84-4cb3-97e6-84b9eca22d4a", 00:22:38.214 "is_configured": true, 00:22:38.214 "data_offset": 256, 00:22:38.214 "data_size": 7936 00:22:38.214 } 
00:22:38.214 ] 00:22:38.214 } 00:22:38.214 } 00:22:38.214 }' 00:22:38.214 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:38.214 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:38.214 BaseBdev2' 00:22:38.214 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:38.214 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:22:38.214 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:38.214 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:38.214 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:38.214 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.214 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:38.214 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.214 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:22:38.214 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:22:38.214 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:38.214 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:38.214 13:42:37 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:38.214 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.214 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:38.214 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.214 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:22:38.214 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:22:38.214 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:38.214 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.214 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:38.214 [2024-11-20 13:42:37.610462] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:38.472 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.472 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:38.472 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:22:38.472 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:38.472 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:22:38.472 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:22:38.472 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:22:38.472 13:42:37 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:38.472 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:38.472 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:38.472 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:38.472 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:38.472 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:38.472 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:38.472 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:38.472 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:38.472 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:38.473 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:38.473 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.473 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:38.473 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.473 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:38.473 "name": "Existed_Raid", 00:22:38.473 "uuid": "b664c2fc-2b74-4aef-96a8-76eb56d70a21", 00:22:38.473 "strip_size_kb": 0, 00:22:38.473 "state": "online", 00:22:38.473 "raid_level": "raid1", 00:22:38.473 "superblock": true, 00:22:38.473 
"num_base_bdevs": 2, 00:22:38.473 "num_base_bdevs_discovered": 1, 00:22:38.473 "num_base_bdevs_operational": 1, 00:22:38.473 "base_bdevs_list": [ 00:22:38.473 { 00:22:38.473 "name": null, 00:22:38.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:38.473 "is_configured": false, 00:22:38.473 "data_offset": 0, 00:22:38.473 "data_size": 7936 00:22:38.473 }, 00:22:38.473 { 00:22:38.473 "name": "BaseBdev2", 00:22:38.473 "uuid": "6786b900-ee84-4cb3-97e6-84b9eca22d4a", 00:22:38.473 "is_configured": true, 00:22:38.473 "data_offset": 256, 00:22:38.473 "data_size": 7936 00:22:38.473 } 00:22:38.473 ] 00:22:38.473 }' 00:22:38.473 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:38.473 13:42:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:38.732 13:42:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:38.732 13:42:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:38.732 13:42:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:38.732 13:42:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.732 13:42:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:38.732 13:42:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:38.732 13:42:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.732 13:42:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:38.732 13:42:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:38.732 13:42:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:22:38.732 13:42:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.732 13:42:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:38.732 [2024-11-20 13:42:38.197457] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:38.732 [2024-11-20 13:42:38.197710] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:38.991 [2024-11-20 13:42:38.296204] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:38.991 [2024-11-20 13:42:38.296463] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:38.991 [2024-11-20 13:42:38.296614] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:38.991 13:42:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.991 13:42:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:38.991 13:42:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:38.991 13:42:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:38.991 13:42:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.991 13:42:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:38.991 13:42:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:38.991 13:42:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.991 13:42:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:38.991 13:42:38 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:38.991 13:42:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:22:38.991 13:42:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 85742 00:22:38.991 13:42:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 85742 ']' 00:22:38.991 13:42:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 85742 00:22:38.991 13:42:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:22:38.991 13:42:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:38.991 13:42:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85742 00:22:38.991 13:42:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:38.991 killing process with pid 85742 00:22:38.991 13:42:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:38.991 13:42:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85742' 00:22:38.991 13:42:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 85742 00:22:38.991 [2024-11-20 13:42:38.399323] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:38.991 13:42:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 85742 00:22:38.991 [2024-11-20 13:42:38.417138] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:40.363 ************************************ 00:22:40.363 END TEST raid_state_function_test_sb_4k 00:22:40.363 ************************************ 00:22:40.363 13:42:39 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@328 -- # return 0 00:22:40.363 00:22:40.363 real 0m5.109s 00:22:40.363 user 0m7.286s 00:22:40.363 sys 0m1.004s 00:22:40.363 13:42:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:40.363 13:42:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:40.363 13:42:39 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:22:40.363 13:42:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:40.363 13:42:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:40.363 13:42:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:40.363 ************************************ 00:22:40.363 START TEST raid_superblock_test_4k 00:22:40.363 ************************************ 00:22:40.363 13:42:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:22:40.363 13:42:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:22:40.363 13:42:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:22:40.363 13:42:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:22:40.363 13:42:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:22:40.363 13:42:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:22:40.363 13:42:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:22:40.363 13:42:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:22:40.363 13:42:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:22:40.363 13:42:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:22:40.363 
13:42:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:22:40.363 13:42:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:22:40.363 13:42:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:22:40.363 13:42:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:22:40.363 13:42:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:22:40.363 13:42:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:22:40.363 13:42:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=85990 00:22:40.364 13:42:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:22:40.364 13:42:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 85990 00:22:40.364 13:42:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 85990 ']' 00:22:40.364 13:42:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.364 13:42:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:40.364 13:42:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.364 13:42:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:40.364 13:42:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:40.364 [2024-11-20 13:42:39.748775] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:22:40.364 [2024-11-20 13:42:39.748901] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85990 ] 00:22:40.623 [2024-11-20 13:42:39.928646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.623 [2024-11-20 13:42:40.040498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.882 [2024-11-20 13:42:40.243638] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:40.882 [2024-11-20 13:42:40.243801] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:41.140 13:42:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:41.140 13:42:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:22:41.140 13:42:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:22:41.140 13:42:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:41.140 13:42:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:22:41.140 13:42:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:22:41.140 13:42:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:41.140 13:42:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:41.140 13:42:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:41.140 13:42:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:41.140 13:42:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:22:41.140 13:42:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.140 13:42:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:41.399 malloc1 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:41.400 [2024-11-20 13:42:40.643636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:41.400 [2024-11-20 13:42:40.643829] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:41.400 [2024-11-20 13:42:40.643863] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:41.400 [2024-11-20 13:42:40.643876] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:41.400 [2024-11-20 13:42:40.646249] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:41.400 [2024-11-20 13:42:40.646400] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:41.400 pt1 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:41.400 malloc2 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:41.400 [2024-11-20 13:42:40.696485] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:41.400 [2024-11-20 13:42:40.696653] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:41.400 [2024-11-20 13:42:40.696716] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:41.400 [2024-11-20 13:42:40.696830] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:41.400 [2024-11-20 13:42:40.699221] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:41.400 [2024-11-20 
13:42:40.699353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:41.400 pt2 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:41.400 [2024-11-20 13:42:40.708525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:41.400 [2024-11-20 13:42:40.710786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:41.400 [2024-11-20 13:42:40.711083] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:41.400 [2024-11-20 13:42:40.711190] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:41.400 [2024-11-20 13:42:40.711487] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:41.400 [2024-11-20 13:42:40.711732] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:41.400 [2024-11-20 13:42:40.711836] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:41.400 [2024-11-20 13:42:40.712118] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.400 13:42:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:41.400 "name": "raid_bdev1", 00:22:41.400 "uuid": "449730b6-f449-4572-b215-1f3745259b2f", 00:22:41.400 "strip_size_kb": 0, 00:22:41.400 "state": "online", 00:22:41.400 "raid_level": "raid1", 00:22:41.400 "superblock": true, 00:22:41.400 "num_base_bdevs": 2, 00:22:41.400 
"num_base_bdevs_discovered": 2, 00:22:41.400 "num_base_bdevs_operational": 2, 00:22:41.400 "base_bdevs_list": [ 00:22:41.400 { 00:22:41.400 "name": "pt1", 00:22:41.400 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:41.400 "is_configured": true, 00:22:41.400 "data_offset": 256, 00:22:41.400 "data_size": 7936 00:22:41.400 }, 00:22:41.400 { 00:22:41.400 "name": "pt2", 00:22:41.400 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:41.400 "is_configured": true, 00:22:41.400 "data_offset": 256, 00:22:41.400 "data_size": 7936 00:22:41.400 } 00:22:41.401 ] 00:22:41.401 }' 00:22:41.401 13:42:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:41.401 13:42:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:41.661 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:22:41.661 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:41.661 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:41.661 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:41.661 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:22:41.661 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:41.661 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:41.661 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.661 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:41.661 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:41.661 [2024-11-20 13:42:41.124308] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:22:41.921 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.921 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:41.921 "name": "raid_bdev1", 00:22:41.921 "aliases": [ 00:22:41.921 "449730b6-f449-4572-b215-1f3745259b2f" 00:22:41.921 ], 00:22:41.921 "product_name": "Raid Volume", 00:22:41.921 "block_size": 4096, 00:22:41.921 "num_blocks": 7936, 00:22:41.921 "uuid": "449730b6-f449-4572-b215-1f3745259b2f", 00:22:41.921 "assigned_rate_limits": { 00:22:41.921 "rw_ios_per_sec": 0, 00:22:41.921 "rw_mbytes_per_sec": 0, 00:22:41.921 "r_mbytes_per_sec": 0, 00:22:41.921 "w_mbytes_per_sec": 0 00:22:41.921 }, 00:22:41.921 "claimed": false, 00:22:41.921 "zoned": false, 00:22:41.921 "supported_io_types": { 00:22:41.921 "read": true, 00:22:41.921 "write": true, 00:22:41.921 "unmap": false, 00:22:41.921 "flush": false, 00:22:41.921 "reset": true, 00:22:41.921 "nvme_admin": false, 00:22:41.921 "nvme_io": false, 00:22:41.921 "nvme_io_md": false, 00:22:41.921 "write_zeroes": true, 00:22:41.921 "zcopy": false, 00:22:41.921 "get_zone_info": false, 00:22:41.921 "zone_management": false, 00:22:41.921 "zone_append": false, 00:22:41.921 "compare": false, 00:22:41.921 "compare_and_write": false, 00:22:41.921 "abort": false, 00:22:41.921 "seek_hole": false, 00:22:41.921 "seek_data": false, 00:22:41.921 "copy": false, 00:22:41.921 "nvme_iov_md": false 00:22:41.921 }, 00:22:41.921 "memory_domains": [ 00:22:41.921 { 00:22:41.921 "dma_device_id": "system", 00:22:41.921 "dma_device_type": 1 00:22:41.921 }, 00:22:41.921 { 00:22:41.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:41.921 "dma_device_type": 2 00:22:41.921 }, 00:22:41.921 { 00:22:41.921 "dma_device_id": "system", 00:22:41.921 "dma_device_type": 1 00:22:41.921 }, 00:22:41.921 { 00:22:41.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:41.921 "dma_device_type": 2 00:22:41.921 } 00:22:41.921 ], 
00:22:41.921 "driver_specific": { 00:22:41.921 "raid": { 00:22:41.921 "uuid": "449730b6-f449-4572-b215-1f3745259b2f", 00:22:41.921 "strip_size_kb": 0, 00:22:41.921 "state": "online", 00:22:41.921 "raid_level": "raid1", 00:22:41.921 "superblock": true, 00:22:41.921 "num_base_bdevs": 2, 00:22:41.921 "num_base_bdevs_discovered": 2, 00:22:41.921 "num_base_bdevs_operational": 2, 00:22:41.921 "base_bdevs_list": [ 00:22:41.921 { 00:22:41.921 "name": "pt1", 00:22:41.921 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:41.921 "is_configured": true, 00:22:41.921 "data_offset": 256, 00:22:41.921 "data_size": 7936 00:22:41.921 }, 00:22:41.921 { 00:22:41.921 "name": "pt2", 00:22:41.921 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:41.921 "is_configured": true, 00:22:41.921 "data_offset": 256, 00:22:41.921 "data_size": 7936 00:22:41.921 } 00:22:41.921 ] 00:22:41.921 } 00:22:41.921 } 00:22:41.921 }' 00:22:41.921 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:41.921 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:41.921 pt2' 00:22:41.921 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:41.921 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:22:41.922 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:41.922 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:41.922 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:41.922 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.922 13:42:41 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:41.922 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.922 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:22:41.922 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:22:41.922 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:41.922 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:41.922 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.922 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:41.922 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:41.922 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.922 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:22:41.922 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:22:41.922 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:41.922 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.922 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:41.922 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:22:41.922 [2024-11-20 13:42:41.323975] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:41.922 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:22:41.922 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=449730b6-f449-4572-b215-1f3745259b2f 00:22:41.922 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 449730b6-f449-4572-b215-1f3745259b2f ']' 00:22:41.922 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:41.922 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.922 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:41.922 [2024-11-20 13:42:41.363639] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:41.922 [2024-11-20 13:42:41.363767] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:41.922 [2024-11-20 13:42:41.363972] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:41.922 [2024-11-20 13:42:41.364073] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:41.922 [2024-11-20 13:42:41.364251] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:41.922 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.922 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:22:41.922 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:41.922 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.922 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:41.922 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.922 13:42:41 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:22:41.922 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:22:41.922 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:41.922 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:22:41.922 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.922 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:42.182 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.182 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:42.182 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:22:42.182 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.182 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:42.182 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.182 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:42.182 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:22:42.182 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.182 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:42.182 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.182 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:22:42.182 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:42.182 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:22:42.182 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:42.182 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:42.182 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:42.182 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:42.182 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:42.182 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:42.182 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.182 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:42.182 [2024-11-20 13:42:41.483486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:42.182 [2024-11-20 13:42:41.485654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:42.182 [2024-11-20 13:42:41.485828] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:42.182 [2024-11-20 13:42:41.485892] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:42.182 [2024-11-20 13:42:41.485910] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:42.182 [2024-11-20 13:42:41.485922] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:22:42.182 request: 00:22:42.182 { 00:22:42.182 "name": "raid_bdev1", 00:22:42.182 "raid_level": "raid1", 00:22:42.182 "base_bdevs": [ 00:22:42.182 "malloc1", 00:22:42.182 "malloc2" 00:22:42.182 ], 00:22:42.182 "superblock": false, 00:22:42.182 "method": "bdev_raid_create", 00:22:42.182 "req_id": 1 00:22:42.182 } 00:22:42.182 Got JSON-RPC error response 00:22:42.182 response: 00:22:42.182 { 00:22:42.182 "code": -17, 00:22:42.182 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:42.182 } 00:22:42.183 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:42.183 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:22:42.183 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:42.183 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:42.183 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:42.183 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:42.183 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.183 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:42.183 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:22:42.183 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.183 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:22:42.183 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:22:42.183 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:22:42.183 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.183 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:42.183 [2024-11-20 13:42:41.547387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:42.183 [2024-11-20 13:42:41.547440] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:42.183 [2024-11-20 13:42:41.547461] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:42.183 [2024-11-20 13:42:41.547475] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:42.183 [2024-11-20 13:42:41.549873] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:42.183 [2024-11-20 13:42:41.549916] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:42.183 [2024-11-20 13:42:41.549987] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:42.183 [2024-11-20 13:42:41.550044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:42.183 pt1 00:22:42.183 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.183 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:22:42.183 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:42.183 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:42.183 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:42.183 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:42.183 13:42:41 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:42.183 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:42.183 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:42.183 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:42.183 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:42.183 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:42.183 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.183 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:42.183 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:42.183 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.183 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:42.183 "name": "raid_bdev1", 00:22:42.183 "uuid": "449730b6-f449-4572-b215-1f3745259b2f", 00:22:42.183 "strip_size_kb": 0, 00:22:42.183 "state": "configuring", 00:22:42.183 "raid_level": "raid1", 00:22:42.183 "superblock": true, 00:22:42.183 "num_base_bdevs": 2, 00:22:42.183 "num_base_bdevs_discovered": 1, 00:22:42.183 "num_base_bdevs_operational": 2, 00:22:42.183 "base_bdevs_list": [ 00:22:42.183 { 00:22:42.183 "name": "pt1", 00:22:42.183 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:42.183 "is_configured": true, 00:22:42.183 "data_offset": 256, 00:22:42.183 "data_size": 7936 00:22:42.183 }, 00:22:42.183 { 00:22:42.183 "name": null, 00:22:42.183 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:42.183 "is_configured": false, 00:22:42.183 "data_offset": 256, 00:22:42.183 "data_size": 7936 00:22:42.183 } 
00:22:42.183 ] 00:22:42.183 }' 00:22:42.183 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:42.183 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:42.752 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:22:42.752 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:22:42.752 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:42.752 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:42.752 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.752 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:42.752 [2024-11-20 13:42:41.991075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:42.752 [2024-11-20 13:42:41.991300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:42.752 [2024-11-20 13:42:41.991356] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:42.752 [2024-11-20 13:42:41.991372] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:42.752 [2024-11-20 13:42:41.991829] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:42.752 [2024-11-20 13:42:41.991859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:42.752 [2024-11-20 13:42:41.991946] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:42.752 [2024-11-20 13:42:41.991976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:42.752 [2024-11-20 13:42:41.992111] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:22:42.752 [2024-11-20 13:42:41.992125] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:42.752 [2024-11-20 13:42:41.992379] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:42.752 [2024-11-20 13:42:41.992534] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:42.752 [2024-11-20 13:42:41.992543] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:22:42.752 [2024-11-20 13:42:41.992675] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:42.752 pt2 00:22:42.752 13:42:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.752 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:42.752 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:42.752 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:42.752 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:42.752 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:42.752 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:42.752 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:42.752 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:42.752 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:42.752 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:42.752 13:42:41 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:42.752 13:42:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:42.752 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:42.752 13:42:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.752 13:42:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:42.752 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:42.752 13:42:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.752 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:42.752 "name": "raid_bdev1", 00:22:42.752 "uuid": "449730b6-f449-4572-b215-1f3745259b2f", 00:22:42.752 "strip_size_kb": 0, 00:22:42.752 "state": "online", 00:22:42.752 "raid_level": "raid1", 00:22:42.752 "superblock": true, 00:22:42.752 "num_base_bdevs": 2, 00:22:42.752 "num_base_bdevs_discovered": 2, 00:22:42.752 "num_base_bdevs_operational": 2, 00:22:42.752 "base_bdevs_list": [ 00:22:42.752 { 00:22:42.752 "name": "pt1", 00:22:42.752 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:42.752 "is_configured": true, 00:22:42.752 "data_offset": 256, 00:22:42.752 "data_size": 7936 00:22:42.752 }, 00:22:42.752 { 00:22:42.752 "name": "pt2", 00:22:42.752 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:42.752 "is_configured": true, 00:22:42.752 "data_offset": 256, 00:22:42.752 "data_size": 7936 00:22:42.752 } 00:22:42.752 ] 00:22:42.752 }' 00:22:42.752 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:42.752 13:42:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:43.012 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:22:43.012 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:43.012 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:43.012 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:43.012 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:22:43.012 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:43.012 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:43.012 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:43.012 13:42:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.012 13:42:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:43.012 [2024-11-20 13:42:42.418642] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:43.012 13:42:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.012 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:43.012 "name": "raid_bdev1", 00:22:43.012 "aliases": [ 00:22:43.012 "449730b6-f449-4572-b215-1f3745259b2f" 00:22:43.012 ], 00:22:43.012 "product_name": "Raid Volume", 00:22:43.012 "block_size": 4096, 00:22:43.012 "num_blocks": 7936, 00:22:43.012 "uuid": "449730b6-f449-4572-b215-1f3745259b2f", 00:22:43.012 "assigned_rate_limits": { 00:22:43.012 "rw_ios_per_sec": 0, 00:22:43.012 "rw_mbytes_per_sec": 0, 00:22:43.012 "r_mbytes_per_sec": 0, 00:22:43.012 "w_mbytes_per_sec": 0 00:22:43.012 }, 00:22:43.012 "claimed": false, 00:22:43.012 "zoned": false, 00:22:43.012 "supported_io_types": { 00:22:43.012 "read": true, 00:22:43.012 "write": true, 00:22:43.012 "unmap": false, 
00:22:43.012 "flush": false, 00:22:43.012 "reset": true, 00:22:43.012 "nvme_admin": false, 00:22:43.012 "nvme_io": false, 00:22:43.012 "nvme_io_md": false, 00:22:43.012 "write_zeroes": true, 00:22:43.012 "zcopy": false, 00:22:43.012 "get_zone_info": false, 00:22:43.012 "zone_management": false, 00:22:43.012 "zone_append": false, 00:22:43.012 "compare": false, 00:22:43.012 "compare_and_write": false, 00:22:43.012 "abort": false, 00:22:43.012 "seek_hole": false, 00:22:43.012 "seek_data": false, 00:22:43.012 "copy": false, 00:22:43.012 "nvme_iov_md": false 00:22:43.012 }, 00:22:43.012 "memory_domains": [ 00:22:43.012 { 00:22:43.012 "dma_device_id": "system", 00:22:43.012 "dma_device_type": 1 00:22:43.012 }, 00:22:43.012 { 00:22:43.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:43.012 "dma_device_type": 2 00:22:43.012 }, 00:22:43.012 { 00:22:43.012 "dma_device_id": "system", 00:22:43.012 "dma_device_type": 1 00:22:43.012 }, 00:22:43.012 { 00:22:43.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:43.012 "dma_device_type": 2 00:22:43.012 } 00:22:43.012 ], 00:22:43.012 "driver_specific": { 00:22:43.012 "raid": { 00:22:43.012 "uuid": "449730b6-f449-4572-b215-1f3745259b2f", 00:22:43.012 "strip_size_kb": 0, 00:22:43.012 "state": "online", 00:22:43.012 "raid_level": "raid1", 00:22:43.012 "superblock": true, 00:22:43.012 "num_base_bdevs": 2, 00:22:43.012 "num_base_bdevs_discovered": 2, 00:22:43.012 "num_base_bdevs_operational": 2, 00:22:43.012 "base_bdevs_list": [ 00:22:43.012 { 00:22:43.012 "name": "pt1", 00:22:43.012 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:43.012 "is_configured": true, 00:22:43.012 "data_offset": 256, 00:22:43.012 "data_size": 7936 00:22:43.012 }, 00:22:43.012 { 00:22:43.012 "name": "pt2", 00:22:43.012 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:43.012 "is_configured": true, 00:22:43.012 "data_offset": 256, 00:22:43.012 "data_size": 7936 00:22:43.012 } 00:22:43.012 ] 00:22:43.012 } 00:22:43.012 } 00:22:43.012 }' 00:22:43.012 
13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:43.270 pt2' 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:43.270 
13:42:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:43.270 [2024-11-20 13:42:42.642437] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 449730b6-f449-4572-b215-1f3745259b2f '!=' 449730b6-f449-4572-b215-1f3745259b2f ']' 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:43.270 [2024-11-20 13:42:42.682227] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:43.270 
13:42:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:43.270 "name": "raid_bdev1", 00:22:43.270 "uuid": "449730b6-f449-4572-b215-1f3745259b2f", 
00:22:43.270 "strip_size_kb": 0, 00:22:43.270 "state": "online", 00:22:43.270 "raid_level": "raid1", 00:22:43.270 "superblock": true, 00:22:43.270 "num_base_bdevs": 2, 00:22:43.270 "num_base_bdevs_discovered": 1, 00:22:43.270 "num_base_bdevs_operational": 1, 00:22:43.270 "base_bdevs_list": [ 00:22:43.270 { 00:22:43.270 "name": null, 00:22:43.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:43.270 "is_configured": false, 00:22:43.270 "data_offset": 0, 00:22:43.270 "data_size": 7936 00:22:43.270 }, 00:22:43.270 { 00:22:43.270 "name": "pt2", 00:22:43.270 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:43.270 "is_configured": true, 00:22:43.270 "data_offset": 256, 00:22:43.270 "data_size": 7936 00:22:43.270 } 00:22:43.270 ] 00:22:43.270 }' 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:43.270 13:42:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:43.837 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:43.837 13:42:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.837 13:42:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:43.837 [2024-11-20 13:42:43.121928] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:43.837 [2024-11-20 13:42:43.122077] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:43.837 [2024-11-20 13:42:43.122322] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:43.837 [2024-11-20 13:42:43.122376] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:43.837 [2024-11-20 13:42:43.122391] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:22:43.837 13:42:43 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.837 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:43.837 13:42:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.837 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:22:43.837 13:42:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:43.837 13:42:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.837 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:22:43.837 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:22:43.837 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:22:43.837 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:43.837 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:22:43.837 13:42:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.838 13:42:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:43.838 13:42:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.838 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:43.838 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:43.838 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:22:43.838 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:43.838 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:22:43.838 13:42:43 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:43.838 13:42:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.838 13:42:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:43.838 [2024-11-20 13:42:43.193797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:43.838 [2024-11-20 13:42:43.194214] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:43.838 [2024-11-20 13:42:43.194315] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:22:43.838 [2024-11-20 13:42:43.194339] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:43.838 [2024-11-20 13:42:43.196745] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:43.838 [2024-11-20 13:42:43.196788] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:43.838 [2024-11-20 13:42:43.196868] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:43.838 [2024-11-20 13:42:43.196917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:43.838 [2024-11-20 13:42:43.197023] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:43.838 [2024-11-20 13:42:43.197038] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:43.838 [2024-11-20 13:42:43.197284] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:43.838 [2024-11-20 13:42:43.197438] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:43.838 [2024-11-20 13:42:43.197454] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
00:22:43.838 [2024-11-20 13:42:43.197596] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:43.838 pt2 00:22:43.838 13:42:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.838 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:43.838 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:43.838 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:43.838 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:43.838 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:43.838 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:43.838 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:43.838 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:43.838 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:43.838 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:43.838 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:43.838 13:42:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.838 13:42:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:43.838 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.838 13:42:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.838 13:42:43 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:43.838 "name": "raid_bdev1", 00:22:43.838 "uuid": "449730b6-f449-4572-b215-1f3745259b2f", 00:22:43.838 "strip_size_kb": 0, 00:22:43.838 "state": "online", 00:22:43.838 "raid_level": "raid1", 00:22:43.838 "superblock": true, 00:22:43.838 "num_base_bdevs": 2, 00:22:43.838 "num_base_bdevs_discovered": 1, 00:22:43.838 "num_base_bdevs_operational": 1, 00:22:43.838 "base_bdevs_list": [ 00:22:43.838 { 00:22:43.838 "name": null, 00:22:43.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:43.838 "is_configured": false, 00:22:43.838 "data_offset": 256, 00:22:43.838 "data_size": 7936 00:22:43.838 }, 00:22:43.838 { 00:22:43.838 "name": "pt2", 00:22:43.838 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:43.838 "is_configured": true, 00:22:43.838 "data_offset": 256, 00:22:43.838 "data_size": 7936 00:22:43.838 } 00:22:43.838 ] 00:22:43.838 }' 00:22:43.838 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:43.838 13:42:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:44.097 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:44.097 13:42:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.097 13:42:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:44.097 [2024-11-20 13:42:43.549291] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:44.097 [2024-11-20 13:42:43.549325] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:44.097 [2024-11-20 13:42:43.549397] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:44.097 [2024-11-20 13:42:43.549449] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:44.097 [2024-11-20 13:42:43.549460] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:22:44.097 13:42:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.097 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:44.097 13:42:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.097 13:42:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:44.097 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:22:44.097 13:42:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.357 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:22:44.357 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:22:44.357 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:22:44.357 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:44.357 13:42:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.357 13:42:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:44.357 [2024-11-20 13:42:43.589243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:44.357 [2024-11-20 13:42:43.589310] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:44.357 [2024-11-20 13:42:43.589333] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:22:44.357 [2024-11-20 13:42:43.589344] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:44.357 [2024-11-20 13:42:43.591786] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:44.357 [2024-11-20 13:42:43.591830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:44.357 [2024-11-20 13:42:43.591918] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:44.357 [2024-11-20 13:42:43.591966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:44.358 [2024-11-20 13:42:43.592131] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:44.358 [2024-11-20 13:42:43.592150] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:44.358 [2024-11-20 13:42:43.592168] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:22:44.358 [2024-11-20 13:42:43.592233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:44.358 [2024-11-20 13:42:43.592304] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:22:44.358 [2024-11-20 13:42:43.592316] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:44.358 [2024-11-20 13:42:43.592574] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:44.358 [2024-11-20 13:42:43.592716] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:22:44.358 [2024-11-20 13:42:43.592736] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:22:44.358 [2024-11-20 13:42:43.592877] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:44.358 pt1 00:22:44.358 13:42:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.358 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:22:44.358 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:44.358 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:44.358 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:44.358 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:44.358 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:44.358 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:44.358 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:44.358 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:44.358 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:44.358 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:44.358 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:44.358 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:44.358 13:42:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.358 13:42:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:44.358 13:42:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.358 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:44.358 "name": "raid_bdev1", 00:22:44.358 "uuid": "449730b6-f449-4572-b215-1f3745259b2f", 00:22:44.358 "strip_size_kb": 0, 00:22:44.358 "state": "online", 00:22:44.358 "raid_level": "raid1", 
00:22:44.358 "superblock": true, 00:22:44.358 "num_base_bdevs": 2, 00:22:44.358 "num_base_bdevs_discovered": 1, 00:22:44.358 "num_base_bdevs_operational": 1, 00:22:44.358 "base_bdevs_list": [ 00:22:44.358 { 00:22:44.358 "name": null, 00:22:44.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.358 "is_configured": false, 00:22:44.358 "data_offset": 256, 00:22:44.358 "data_size": 7936 00:22:44.358 }, 00:22:44.358 { 00:22:44.358 "name": "pt2", 00:22:44.358 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:44.358 "is_configured": true, 00:22:44.358 "data_offset": 256, 00:22:44.358 "data_size": 7936 00:22:44.358 } 00:22:44.358 ] 00:22:44.358 }' 00:22:44.358 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:44.358 13:42:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:44.617 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:22:44.617 13:42:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:44.617 13:42:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.617 13:42:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:44.617 13:42:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.617 13:42:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:22:44.617 13:42:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:44.617 13:42:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.617 13:42:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:44.617 13:42:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:22:44.617 
[2024-11-20 13:42:44.036824] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:44.617 13:42:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.617 13:42:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 449730b6-f449-4572-b215-1f3745259b2f '!=' 449730b6-f449-4572-b215-1f3745259b2f ']' 00:22:44.617 13:42:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 85990 00:22:44.617 13:42:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 85990 ']' 00:22:44.617 13:42:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 85990 00:22:44.617 13:42:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:22:44.617 13:42:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:44.617 13:42:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85990 00:22:44.876 13:42:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:44.876 13:42:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:44.876 killing process with pid 85990 00:22:44.876 13:42:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85990' 00:22:44.876 13:42:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 85990 00:22:44.876 [2024-11-20 13:42:44.116305] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:44.876 13:42:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 85990 00:22:44.876 [2024-11-20 13:42:44.116416] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:44.876 [2024-11-20 13:42:44.116479] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:22:44.876 [2024-11-20 13:42:44.116501] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:22:44.876 [2024-11-20 13:42:44.322292] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:46.351 13:42:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:22:46.351 00:22:46.351 real 0m5.829s 00:22:46.351 user 0m8.702s 00:22:46.351 sys 0m1.154s 00:22:46.351 13:42:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:46.351 13:42:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:46.351 ************************************ 00:22:46.351 END TEST raid_superblock_test_4k 00:22:46.351 ************************************ 00:22:46.351 13:42:45 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:22:46.351 13:42:45 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:22:46.351 13:42:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:22:46.351 13:42:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:46.351 13:42:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:46.351 ************************************ 00:22:46.351 START TEST raid_rebuild_test_sb_4k 00:22:46.351 ************************************ 00:22:46.351 13:42:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:22:46.351 13:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:22:46.351 13:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:22:46.351 13:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:22:46.351 13:42:45 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:22:46.351 13:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:22:46.351 13:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:22:46.351 13:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:46.351 13:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:22:46.351 13:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:46.351 13:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:46.351 13:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:22:46.351 13:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:46.351 13:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:46.351 13:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:46.351 13:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:22:46.351 13:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:22:46.351 13:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:22:46.351 13:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:22:46.351 13:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:22:46.351 13:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:22:46.351 13:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:22:46.351 13:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:22:46.351 13:42:45 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:22:46.351 13:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:22:46.351 13:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86314 00:22:46.351 13:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:46.351 13:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86314 00:22:46.351 13:42:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86314 ']' 00:22:46.351 13:42:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:46.351 13:42:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:46.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:46.351 13:42:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:46.351 13:42:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:46.351 13:42:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:46.351 [2024-11-20 13:42:45.662433] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:22:46.352 [2024-11-20 13:42:45.662559] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86314 ] 00:22:46.352 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:22:46.352 Zero copy mechanism will not be used. 00:22:46.611 [2024-11-20 13:42:45.839109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.611 [2024-11-20 13:42:45.956599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.870 [2024-11-20 13:42:46.175242] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:46.870 [2024-11-20 13:42:46.175307] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:47.128 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:47.128 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:22:47.128 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:47.128 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:22:47.128 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.128 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:47.388 BaseBdev1_malloc 00:22:47.388 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.388 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:47.388 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.388 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:47.388 [2024-11-20 13:42:46.623194] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:47.388 [2024-11-20 13:42:46.623264] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:47.388 [2024-11-20 13:42:46.623287] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000007280 00:22:47.388 [2024-11-20 13:42:46.623302] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:47.388 [2024-11-20 13:42:46.625629] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:47.388 [2024-11-20 13:42:46.625674] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:47.388 BaseBdev1 00:22:47.388 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.388 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:47.388 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:22:47.388 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.388 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:47.388 BaseBdev2_malloc 00:22:47.388 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.388 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:47.388 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.388 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:47.388 [2024-11-20 13:42:46.679720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:47.388 [2024-11-20 13:42:46.679789] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:47.388 [2024-11-20 13:42:46.679814] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:47.388 [2024-11-20 13:42:46.679829] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:22:47.388 [2024-11-20 13:42:46.682185] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:47.388 [2024-11-20 13:42:46.682226] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:47.388 BaseBdev2 00:22:47.388 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.388 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:22:47.388 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.388 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:47.388 spare_malloc 00:22:47.388 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.388 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:47.388 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.388 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:47.388 spare_delay 00:22:47.388 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.388 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:47.388 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.388 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:47.388 [2024-11-20 13:42:46.759377] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:47.388 [2024-11-20 13:42:46.759441] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:47.388 [2024-11-20 13:42:46.759462] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:47.388 [2024-11-20 13:42:46.759476] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:47.388 [2024-11-20 13:42:46.761798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:47.388 [2024-11-20 13:42:46.761840] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:47.388 spare 00:22:47.388 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.388 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:22:47.388 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.388 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:47.388 [2024-11-20 13:42:46.771427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:47.388 [2024-11-20 13:42:46.773451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:47.388 [2024-11-20 13:42:46.773638] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:47.388 [2024-11-20 13:42:46.773655] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:47.388 [2024-11-20 13:42:46.773898] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:47.388 [2024-11-20 13:42:46.774081] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:47.388 [2024-11-20 13:42:46.774100] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:47.388 [2024-11-20 13:42:46.774243] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:47.388 
13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.389 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:47.389 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:47.389 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:47.389 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:47.389 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:47.389 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:47.389 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:47.389 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:47.389 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:47.389 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:47.389 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:47.389 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:47.389 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.389 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:47.389 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.389 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:47.389 "name": "raid_bdev1", 00:22:47.389 "uuid": "ce775387-f092-4f78-9530-9b58b43e3873", 
00:22:47.389 "strip_size_kb": 0, 00:22:47.389 "state": "online", 00:22:47.389 "raid_level": "raid1", 00:22:47.389 "superblock": true, 00:22:47.389 "num_base_bdevs": 2, 00:22:47.389 "num_base_bdevs_discovered": 2, 00:22:47.389 "num_base_bdevs_operational": 2, 00:22:47.389 "base_bdevs_list": [ 00:22:47.389 { 00:22:47.389 "name": "BaseBdev1", 00:22:47.389 "uuid": "7f823d1a-91b6-5cd3-bc6c-ad5e9970d056", 00:22:47.389 "is_configured": true, 00:22:47.389 "data_offset": 256, 00:22:47.389 "data_size": 7936 00:22:47.389 }, 00:22:47.389 { 00:22:47.389 "name": "BaseBdev2", 00:22:47.389 "uuid": "a33e8ac8-716d-5e3b-8c11-afc28998c0f6", 00:22:47.389 "is_configured": true, 00:22:47.389 "data_offset": 256, 00:22:47.389 "data_size": 7936 00:22:47.389 } 00:22:47.389 ] 00:22:47.389 }' 00:22:47.389 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:47.389 13:42:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:47.957 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:22:47.957 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:47.957 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.957 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:47.957 [2024-11-20 13:42:47.243228] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:47.957 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.957 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:22:47.957 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:47.957 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 
00:22:47.957 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.957 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:47.957 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.957 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:22:47.957 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:22:47.958 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:22:47.958 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:22:47.958 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:22:47.958 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:47.958 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:47.958 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:47.958 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:47.958 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:47.958 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:22:47.958 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:47.958 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:47.958 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:48.217 [2024-11-20 13:42:47.510605] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005fb0 00:22:48.217 /dev/nbd0 00:22:48.217 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:48.217 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:48.217 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:48.217 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:22:48.217 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:48.217 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:48.217 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:48.217 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:22:48.217 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:48.217 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:48.217 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:48.217 1+0 records in 00:22:48.217 1+0 records out 00:22:48.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218637 s, 18.7 MB/s 00:22:48.217 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:48.217 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:22:48.217 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:48.217 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:48.217 13:42:47 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:22:48.217 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:48.217 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:48.217 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:22:48.217 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:22:48.217 13:42:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:22:49.157 7936+0 records in 00:22:49.157 7936+0 records out 00:22:49.157 32505856 bytes (33 MB, 31 MiB) copied, 0.710428 s, 45.8 MB/s 00:22:49.157 13:42:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:22:49.157 13:42:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:49.157 13:42:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:49.157 13:42:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:49.157 13:42:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:22:49.157 13:42:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:49.157 13:42:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:49.157 13:42:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:49.157 [2024-11-20 13:42:48.510325] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:49.157 13:42:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:49.157 13:42:48 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:49.157 13:42:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:49.157 13:42:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:49.157 13:42:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:49.157 13:42:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:22:49.157 13:42:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:22:49.157 13:42:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:22:49.157 13:42:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.157 13:42:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:49.157 [2024-11-20 13:42:48.526417] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:49.157 13:42:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.157 13:42:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:49.157 13:42:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:49.157 13:42:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:49.157 13:42:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:49.158 13:42:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:49.158 13:42:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:49.158 13:42:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:49.158 13:42:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 
-- # local num_base_bdevs 00:22:49.158 13:42:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:49.158 13:42:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:49.158 13:42:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:49.158 13:42:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:49.158 13:42:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.158 13:42:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:49.158 13:42:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.158 13:42:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:49.158 "name": "raid_bdev1", 00:22:49.158 "uuid": "ce775387-f092-4f78-9530-9b58b43e3873", 00:22:49.158 "strip_size_kb": 0, 00:22:49.158 "state": "online", 00:22:49.158 "raid_level": "raid1", 00:22:49.158 "superblock": true, 00:22:49.158 "num_base_bdevs": 2, 00:22:49.158 "num_base_bdevs_discovered": 1, 00:22:49.158 "num_base_bdevs_operational": 1, 00:22:49.158 "base_bdevs_list": [ 00:22:49.158 { 00:22:49.158 "name": null, 00:22:49.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:49.158 "is_configured": false, 00:22:49.158 "data_offset": 0, 00:22:49.158 "data_size": 7936 00:22:49.158 }, 00:22:49.158 { 00:22:49.158 "name": "BaseBdev2", 00:22:49.158 "uuid": "a33e8ac8-716d-5e3b-8c11-afc28998c0f6", 00:22:49.158 "is_configured": true, 00:22:49.158 "data_offset": 256, 00:22:49.158 "data_size": 7936 00:22:49.158 } 00:22:49.158 ] 00:22:49.158 }' 00:22:49.158 13:42:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:49.158 13:42:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:49.725 13:42:48 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:49.725 13:42:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.725 13:42:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:49.725 [2024-11-20 13:42:48.974157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:49.725 [2024-11-20 13:42:48.992579] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:22:49.725 13:42:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.725 13:42:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:22:49.725 [2024-11-20 13:42:48.994671] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:50.660 13:42:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:50.660 13:42:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:50.660 13:42:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:50.660 13:42:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:50.660 13:42:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:50.660 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:50.660 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.660 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:50.660 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:50.660 13:42:50 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.660 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:50.660 "name": "raid_bdev1", 00:22:50.661 "uuid": "ce775387-f092-4f78-9530-9b58b43e3873", 00:22:50.661 "strip_size_kb": 0, 00:22:50.661 "state": "online", 00:22:50.661 "raid_level": "raid1", 00:22:50.661 "superblock": true, 00:22:50.661 "num_base_bdevs": 2, 00:22:50.661 "num_base_bdevs_discovered": 2, 00:22:50.661 "num_base_bdevs_operational": 2, 00:22:50.661 "process": { 00:22:50.661 "type": "rebuild", 00:22:50.661 "target": "spare", 00:22:50.661 "progress": { 00:22:50.661 "blocks": 2560, 00:22:50.661 "percent": 32 00:22:50.661 } 00:22:50.661 }, 00:22:50.661 "base_bdevs_list": [ 00:22:50.661 { 00:22:50.661 "name": "spare", 00:22:50.661 "uuid": "62852a27-8e86-543b-8eea-e3e947386116", 00:22:50.661 "is_configured": true, 00:22:50.661 "data_offset": 256, 00:22:50.661 "data_size": 7936 00:22:50.661 }, 00:22:50.661 { 00:22:50.661 "name": "BaseBdev2", 00:22:50.661 "uuid": "a33e8ac8-716d-5e3b-8c11-afc28998c0f6", 00:22:50.661 "is_configured": true, 00:22:50.661 "data_offset": 256, 00:22:50.661 "data_size": 7936 00:22:50.661 } 00:22:50.661 ] 00:22:50.661 }' 00:22:50.661 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:50.661 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:50.661 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:50.661 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:50.661 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:50.661 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.661 13:42:50 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:50.661 [2024-11-20 13:42:50.122381] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:50.920 [2024-11-20 13:42:50.199761] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:50.920 [2024-11-20 13:42:50.199832] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:50.920 [2024-11-20 13:42:50.199848] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:50.920 [2024-11-20 13:42:50.199859] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:50.920 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.920 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:50.920 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:50.920 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:50.920 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:50.920 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:50.920 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:50.920 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:50.921 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:50.921 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:50.921 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:50.921 13:42:50 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:50.921 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:50.921 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.921 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:50.921 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.921 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:50.921 "name": "raid_bdev1", 00:22:50.921 "uuid": "ce775387-f092-4f78-9530-9b58b43e3873", 00:22:50.921 "strip_size_kb": 0, 00:22:50.921 "state": "online", 00:22:50.921 "raid_level": "raid1", 00:22:50.921 "superblock": true, 00:22:50.921 "num_base_bdevs": 2, 00:22:50.921 "num_base_bdevs_discovered": 1, 00:22:50.921 "num_base_bdevs_operational": 1, 00:22:50.921 "base_bdevs_list": [ 00:22:50.921 { 00:22:50.921 "name": null, 00:22:50.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:50.921 "is_configured": false, 00:22:50.921 "data_offset": 0, 00:22:50.921 "data_size": 7936 00:22:50.921 }, 00:22:50.921 { 00:22:50.921 "name": "BaseBdev2", 00:22:50.921 "uuid": "a33e8ac8-716d-5e3b-8c11-afc28998c0f6", 00:22:50.921 "is_configured": true, 00:22:50.921 "data_offset": 256, 00:22:50.921 "data_size": 7936 00:22:50.921 } 00:22:50.921 ] 00:22:50.921 }' 00:22:50.921 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:50.921 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:51.181 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:51.181 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:51.181 13:42:50 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:51.181 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:51.181 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:51.440 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.440 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:51.440 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.441 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:51.441 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.441 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:51.441 "name": "raid_bdev1", 00:22:51.441 "uuid": "ce775387-f092-4f78-9530-9b58b43e3873", 00:22:51.441 "strip_size_kb": 0, 00:22:51.441 "state": "online", 00:22:51.441 "raid_level": "raid1", 00:22:51.441 "superblock": true, 00:22:51.441 "num_base_bdevs": 2, 00:22:51.441 "num_base_bdevs_discovered": 1, 00:22:51.441 "num_base_bdevs_operational": 1, 00:22:51.441 "base_bdevs_list": [ 00:22:51.441 { 00:22:51.441 "name": null, 00:22:51.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.441 "is_configured": false, 00:22:51.441 "data_offset": 0, 00:22:51.441 "data_size": 7936 00:22:51.441 }, 00:22:51.441 { 00:22:51.441 "name": "BaseBdev2", 00:22:51.441 "uuid": "a33e8ac8-716d-5e3b-8c11-afc28998c0f6", 00:22:51.441 "is_configured": true, 00:22:51.441 "data_offset": 256, 00:22:51.441 "data_size": 7936 00:22:51.441 } 00:22:51.441 ] 00:22:51.441 }' 00:22:51.441 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:51.441 13:42:50 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:51.441 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:51.441 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:51.441 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:51.441 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.441 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:51.441 [2024-11-20 13:42:50.787330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:51.441 [2024-11-20 13:42:50.803949] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:22:51.441 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.441 13:42:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:22:51.441 [2024-11-20 13:42:50.806035] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:52.376 13:42:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:52.376 13:42:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:52.376 13:42:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:52.376 13:42:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:52.376 13:42:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:52.376 13:42:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:52.376 13:42:51 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.376 13:42:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.376 13:42:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:52.376 13:42:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.376 13:42:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:52.376 "name": "raid_bdev1", 00:22:52.376 "uuid": "ce775387-f092-4f78-9530-9b58b43e3873", 00:22:52.376 "strip_size_kb": 0, 00:22:52.376 "state": "online", 00:22:52.376 "raid_level": "raid1", 00:22:52.376 "superblock": true, 00:22:52.376 "num_base_bdevs": 2, 00:22:52.376 "num_base_bdevs_discovered": 2, 00:22:52.376 "num_base_bdevs_operational": 2, 00:22:52.376 "process": { 00:22:52.376 "type": "rebuild", 00:22:52.376 "target": "spare", 00:22:52.376 "progress": { 00:22:52.376 "blocks": 2560, 00:22:52.376 "percent": 32 00:22:52.376 } 00:22:52.376 }, 00:22:52.376 "base_bdevs_list": [ 00:22:52.376 { 00:22:52.376 "name": "spare", 00:22:52.376 "uuid": "62852a27-8e86-543b-8eea-e3e947386116", 00:22:52.376 "is_configured": true, 00:22:52.376 "data_offset": 256, 00:22:52.376 "data_size": 7936 00:22:52.376 }, 00:22:52.376 { 00:22:52.376 "name": "BaseBdev2", 00:22:52.376 "uuid": "a33e8ac8-716d-5e3b-8c11-afc28998c0f6", 00:22:52.376 "is_configured": true, 00:22:52.376 "data_offset": 256, 00:22:52.376 "data_size": 7936 00:22:52.376 } 00:22:52.376 ] 00:22:52.376 }' 00:22:52.376 13:42:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:52.634 13:42:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:52.634 13:42:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:52.634 13:42:51 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:52.634 13:42:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:22:52.634 13:42:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:22:52.634 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:22:52.634 13:42:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:22:52.634 13:42:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:22:52.634 13:42:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:22:52.634 13:42:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=679 00:22:52.634 13:42:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:52.634 13:42:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:52.634 13:42:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:52.634 13:42:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:52.634 13:42:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:52.634 13:42:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:52.634 13:42:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:52.634 13:42:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.634 13:42:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.634 13:42:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:52.634 13:42:51 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.634 13:42:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:52.634 "name": "raid_bdev1", 00:22:52.634 "uuid": "ce775387-f092-4f78-9530-9b58b43e3873", 00:22:52.634 "strip_size_kb": 0, 00:22:52.634 "state": "online", 00:22:52.634 "raid_level": "raid1", 00:22:52.634 "superblock": true, 00:22:52.634 "num_base_bdevs": 2, 00:22:52.634 "num_base_bdevs_discovered": 2, 00:22:52.634 "num_base_bdevs_operational": 2, 00:22:52.634 "process": { 00:22:52.634 "type": "rebuild", 00:22:52.634 "target": "spare", 00:22:52.634 "progress": { 00:22:52.634 "blocks": 2816, 00:22:52.634 "percent": 35 00:22:52.634 } 00:22:52.634 }, 00:22:52.634 "base_bdevs_list": [ 00:22:52.634 { 00:22:52.634 "name": "spare", 00:22:52.634 "uuid": "62852a27-8e86-543b-8eea-e3e947386116", 00:22:52.634 "is_configured": true, 00:22:52.634 "data_offset": 256, 00:22:52.634 "data_size": 7936 00:22:52.634 }, 00:22:52.634 { 00:22:52.634 "name": "BaseBdev2", 00:22:52.634 "uuid": "a33e8ac8-716d-5e3b-8c11-afc28998c0f6", 00:22:52.634 "is_configured": true, 00:22:52.634 "data_offset": 256, 00:22:52.634 "data_size": 7936 00:22:52.634 } 00:22:52.634 ] 00:22:52.634 }' 00:22:52.634 13:42:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:52.634 13:42:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:52.635 13:42:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:52.635 13:42:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:52.635 13:42:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:54.013 13:42:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:54.013 13:42:53 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:54.013 13:42:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:54.013 13:42:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:54.013 13:42:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:54.013 13:42:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:54.013 13:42:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.013 13:42:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:54.013 13:42:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.013 13:42:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:54.013 13:42:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.013 13:42:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:54.013 "name": "raid_bdev1", 00:22:54.013 "uuid": "ce775387-f092-4f78-9530-9b58b43e3873", 00:22:54.013 "strip_size_kb": 0, 00:22:54.013 "state": "online", 00:22:54.013 "raid_level": "raid1", 00:22:54.013 "superblock": true, 00:22:54.013 "num_base_bdevs": 2, 00:22:54.013 "num_base_bdevs_discovered": 2, 00:22:54.013 "num_base_bdevs_operational": 2, 00:22:54.013 "process": { 00:22:54.013 "type": "rebuild", 00:22:54.013 "target": "spare", 00:22:54.013 "progress": { 00:22:54.013 "blocks": 5632, 00:22:54.013 "percent": 70 00:22:54.013 } 00:22:54.013 }, 00:22:54.013 "base_bdevs_list": [ 00:22:54.013 { 00:22:54.013 "name": "spare", 00:22:54.013 "uuid": "62852a27-8e86-543b-8eea-e3e947386116", 00:22:54.013 "is_configured": true, 00:22:54.013 "data_offset": 256, 00:22:54.013 "data_size": 7936 00:22:54.013 
}, 00:22:54.013 { 00:22:54.013 "name": "BaseBdev2", 00:22:54.013 "uuid": "a33e8ac8-716d-5e3b-8c11-afc28998c0f6", 00:22:54.013 "is_configured": true, 00:22:54.013 "data_offset": 256, 00:22:54.013 "data_size": 7936 00:22:54.013 } 00:22:54.013 ] 00:22:54.013 }' 00:22:54.013 13:42:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:54.013 13:42:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:54.013 13:42:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:54.013 13:42:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:54.013 13:42:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:54.580 [2024-11-20 13:42:53.919701] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:54.580 [2024-11-20 13:42:53.919782] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:54.580 [2024-11-20 13:42:53.919905] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:54.840 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:54.841 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:54.841 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:54.841 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:54.841 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:54.841 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:54.841 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:22:54.841 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.841 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.841 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:54.841 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.841 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:54.841 "name": "raid_bdev1", 00:22:54.841 "uuid": "ce775387-f092-4f78-9530-9b58b43e3873", 00:22:54.841 "strip_size_kb": 0, 00:22:54.841 "state": "online", 00:22:54.841 "raid_level": "raid1", 00:22:54.841 "superblock": true, 00:22:54.841 "num_base_bdevs": 2, 00:22:54.841 "num_base_bdevs_discovered": 2, 00:22:54.841 "num_base_bdevs_operational": 2, 00:22:54.841 "base_bdevs_list": [ 00:22:54.841 { 00:22:54.841 "name": "spare", 00:22:54.841 "uuid": "62852a27-8e86-543b-8eea-e3e947386116", 00:22:54.841 "is_configured": true, 00:22:54.841 "data_offset": 256, 00:22:54.841 "data_size": 7936 00:22:54.841 }, 00:22:54.841 { 00:22:54.841 "name": "BaseBdev2", 00:22:54.841 "uuid": "a33e8ac8-716d-5e3b-8c11-afc28998c0f6", 00:22:54.841 "is_configured": true, 00:22:54.841 "data_offset": 256, 00:22:54.841 "data_size": 7936 00:22:54.841 } 00:22:54.841 ] 00:22:54.841 }' 00:22:54.841 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:54.841 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:54.841 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:55.101 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:22:55.101 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 
00:22:55.101 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:55.101 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:55.101 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:55.101 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:55.101 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:55.101 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:55.101 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:55.101 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.101 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:55.101 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.101 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:55.101 "name": "raid_bdev1", 00:22:55.101 "uuid": "ce775387-f092-4f78-9530-9b58b43e3873", 00:22:55.101 "strip_size_kb": 0, 00:22:55.101 "state": "online", 00:22:55.101 "raid_level": "raid1", 00:22:55.101 "superblock": true, 00:22:55.101 "num_base_bdevs": 2, 00:22:55.101 "num_base_bdevs_discovered": 2, 00:22:55.101 "num_base_bdevs_operational": 2, 00:22:55.102 "base_bdevs_list": [ 00:22:55.102 { 00:22:55.102 "name": "spare", 00:22:55.102 "uuid": "62852a27-8e86-543b-8eea-e3e947386116", 00:22:55.102 "is_configured": true, 00:22:55.102 "data_offset": 256, 00:22:55.102 "data_size": 7936 00:22:55.102 }, 00:22:55.102 { 00:22:55.102 "name": "BaseBdev2", 00:22:55.102 "uuid": "a33e8ac8-716d-5e3b-8c11-afc28998c0f6", 00:22:55.102 "is_configured": true, 
00:22:55.102 "data_offset": 256, 00:22:55.102 "data_size": 7936 00:22:55.102 } 00:22:55.102 ] 00:22:55.102 }' 00:22:55.102 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:55.102 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:55.102 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:55.102 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:55.102 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:55.102 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:55.102 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:55.102 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:55.102 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:55.102 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:55.102 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:55.102 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:55.102 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:55.102 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:55.102 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:55.102 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:55.102 13:42:54 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.102 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:55.102 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.102 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:55.102 "name": "raid_bdev1", 00:22:55.102 "uuid": "ce775387-f092-4f78-9530-9b58b43e3873", 00:22:55.102 "strip_size_kb": 0, 00:22:55.102 "state": "online", 00:22:55.102 "raid_level": "raid1", 00:22:55.102 "superblock": true, 00:22:55.102 "num_base_bdevs": 2, 00:22:55.102 "num_base_bdevs_discovered": 2, 00:22:55.102 "num_base_bdevs_operational": 2, 00:22:55.102 "base_bdevs_list": [ 00:22:55.102 { 00:22:55.102 "name": "spare", 00:22:55.102 "uuid": "62852a27-8e86-543b-8eea-e3e947386116", 00:22:55.102 "is_configured": true, 00:22:55.102 "data_offset": 256, 00:22:55.102 "data_size": 7936 00:22:55.102 }, 00:22:55.102 { 00:22:55.102 "name": "BaseBdev2", 00:22:55.102 "uuid": "a33e8ac8-716d-5e3b-8c11-afc28998c0f6", 00:22:55.102 "is_configured": true, 00:22:55.102 "data_offset": 256, 00:22:55.102 "data_size": 7936 00:22:55.102 } 00:22:55.102 ] 00:22:55.102 }' 00:22:55.102 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:55.102 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:55.670 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:55.670 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.670 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:55.670 [2024-11-20 13:42:54.882589] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:55.670 [2024-11-20 13:42:54.882627] bdev_raid.c:1899:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:22:55.670 [2024-11-20 13:42:54.882709] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:55.670 [2024-11-20 13:42:54.882773] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:55.670 [2024-11-20 13:42:54.882788] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:55.670 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.670 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:55.670 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.670 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:55.670 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:22:55.670 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.670 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:22:55.670 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:22:55.670 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:22:55.670 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:55.670 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:55.670 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:55.670 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:55.670 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:55.670 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:55.670 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:22:55.670 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:55.670 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:55.670 13:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:55.930 /dev/nbd0 00:22:55.930 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:55.930 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:55.930 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:55.930 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:22:55.930 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:55.930 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:55.930 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:55.930 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:22:55.930 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:55.930 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:55.930 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:55.930 1+0 records in 00:22:55.930 1+0 records out 00:22:55.930 
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000389344 s, 10.5 MB/s 00:22:55.930 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:55.930 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:22:55.930 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:55.930 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:55.930 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:22:55.930 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:55.930 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:55.930 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:22:56.190 /dev/nbd1 00:22:56.190 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:56.190 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:56.190 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:22:56.190 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:22:56.190 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:56.190 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:56.190 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:22:56.190 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:22:56.190 13:42:55 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:56.190 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:56.190 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:56.190 1+0 records in 00:22:56.190 1+0 records out 00:22:56.190 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325442 s, 12.6 MB/s 00:22:56.190 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:56.190 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:22:56.190 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:56.190 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:56.190 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:22:56.190 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:56.190 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:56.190 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:56.449 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:22:56.449 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:56.450 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:56.450 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:56.450 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@51 -- # local i 00:22:56.450 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:56.450 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:56.709 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:56.709 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:56.709 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:56.709 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:56.709 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:56.709 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:56.709 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:22:56.709 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:22:56.709 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:56.709 13:42:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:22:56.709 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:56.709 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:56.709 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:56.709 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:56.709 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:56.709 13:42:56 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:56.709 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:22:56.709 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:22:56.709 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:22:56.710 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:22:56.710 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.969 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:56.969 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.969 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:56.969 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.969 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:56.969 [2024-11-20 13:42:56.218106] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:56.969 [2024-11-20 13:42:56.218165] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:56.969 [2024-11-20 13:42:56.218192] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:56.969 [2024-11-20 13:42:56.218204] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:56.969 [2024-11-20 13:42:56.220688] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:56.969 [2024-11-20 13:42:56.220730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:56.969 [2024-11-20 13:42:56.220824] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: 
raid superblock found on bdev spare 00:22:56.969 [2024-11-20 13:42:56.220878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:56.969 [2024-11-20 13:42:56.221031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:56.969 spare 00:22:56.969 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.969 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:22:56.969 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.969 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:56.969 [2024-11-20 13:42:56.320984] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:22:56.969 [2024-11-20 13:42:56.321227] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:56.969 [2024-11-20 13:42:56.321633] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:22:56.969 [2024-11-20 13:42:56.321947] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:22:56.969 [2024-11-20 13:42:56.322072] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:22:56.969 [2024-11-20 13:42:56.322436] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:56.969 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.970 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:56.970 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:56.970 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:56.970 
13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:56.970 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:56.970 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:56.970 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:56.970 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:56.970 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:56.970 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:56.970 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:56.970 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:56.970 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.970 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:56.970 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.970 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:56.970 "name": "raid_bdev1", 00:22:56.970 "uuid": "ce775387-f092-4f78-9530-9b58b43e3873", 00:22:56.970 "strip_size_kb": 0, 00:22:56.970 "state": "online", 00:22:56.970 "raid_level": "raid1", 00:22:56.970 "superblock": true, 00:22:56.970 "num_base_bdevs": 2, 00:22:56.970 "num_base_bdevs_discovered": 2, 00:22:56.970 "num_base_bdevs_operational": 2, 00:22:56.970 "base_bdevs_list": [ 00:22:56.970 { 00:22:56.970 "name": "spare", 00:22:56.970 "uuid": "62852a27-8e86-543b-8eea-e3e947386116", 00:22:56.970 "is_configured": true, 00:22:56.970 "data_offset": 256, 00:22:56.970 
"data_size": 7936 00:22:56.970 }, 00:22:56.970 { 00:22:56.970 "name": "BaseBdev2", 00:22:56.970 "uuid": "a33e8ac8-716d-5e3b-8c11-afc28998c0f6", 00:22:56.970 "is_configured": true, 00:22:56.970 "data_offset": 256, 00:22:56.970 "data_size": 7936 00:22:56.970 } 00:22:56.970 ] 00:22:56.970 }' 00:22:56.970 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:56.970 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:57.539 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:57.539 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:57.539 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:57.539 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:57.539 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:57.539 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:57.539 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.539 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.539 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:57.539 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.539 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:57.539 "name": "raid_bdev1", 00:22:57.539 "uuid": "ce775387-f092-4f78-9530-9b58b43e3873", 00:22:57.539 "strip_size_kb": 0, 00:22:57.539 "state": "online", 00:22:57.539 "raid_level": "raid1", 00:22:57.539 "superblock": true, 00:22:57.539 "num_base_bdevs": 2, 
00:22:57.539 "num_base_bdevs_discovered": 2, 00:22:57.539 "num_base_bdevs_operational": 2, 00:22:57.539 "base_bdevs_list": [ 00:22:57.539 { 00:22:57.539 "name": "spare", 00:22:57.539 "uuid": "62852a27-8e86-543b-8eea-e3e947386116", 00:22:57.539 "is_configured": true, 00:22:57.539 "data_offset": 256, 00:22:57.539 "data_size": 7936 00:22:57.539 }, 00:22:57.539 { 00:22:57.539 "name": "BaseBdev2", 00:22:57.539 "uuid": "a33e8ac8-716d-5e3b-8c11-afc28998c0f6", 00:22:57.539 "is_configured": true, 00:22:57.539 "data_offset": 256, 00:22:57.539 "data_size": 7936 00:22:57.539 } 00:22:57.539 ] 00:22:57.539 }' 00:22:57.539 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:57.539 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:57.539 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:57.539 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:57.539 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:57.539 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:57.539 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.539 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:57.539 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.539 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:22:57.539 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:57.539 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.539 13:42:56 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:57.539 [2024-11-20 13:42:56.913510] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:57.539 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.539 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:57.539 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:57.539 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:57.539 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:57.539 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:57.539 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:57.539 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:57.539 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:57.539 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:57.539 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:57.539 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:57.539 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.539 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.539 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:57.539 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.539 
13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:57.539 "name": "raid_bdev1", 00:22:57.539 "uuid": "ce775387-f092-4f78-9530-9b58b43e3873", 00:22:57.539 "strip_size_kb": 0, 00:22:57.539 "state": "online", 00:22:57.539 "raid_level": "raid1", 00:22:57.539 "superblock": true, 00:22:57.539 "num_base_bdevs": 2, 00:22:57.539 "num_base_bdevs_discovered": 1, 00:22:57.539 "num_base_bdevs_operational": 1, 00:22:57.540 "base_bdevs_list": [ 00:22:57.540 { 00:22:57.540 "name": null, 00:22:57.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:57.540 "is_configured": false, 00:22:57.540 "data_offset": 0, 00:22:57.540 "data_size": 7936 00:22:57.540 }, 00:22:57.540 { 00:22:57.540 "name": "BaseBdev2", 00:22:57.540 "uuid": "a33e8ac8-716d-5e3b-8c11-afc28998c0f6", 00:22:57.540 "is_configured": true, 00:22:57.540 "data_offset": 256, 00:22:57.540 "data_size": 7936 00:22:57.540 } 00:22:57.540 ] 00:22:57.540 }' 00:22:57.540 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:57.540 13:42:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:58.108 13:42:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:58.108 13:42:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.108 13:42:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:58.108 [2024-11-20 13:42:57.400884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:58.108 [2024-11-20 13:42:57.401249] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:58.108 [2024-11-20 13:42:57.401390] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:58.108 [2024-11-20 13:42:57.401443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:58.108 [2024-11-20 13:42:57.418210] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:22:58.108 13:42:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.108 13:42:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:22:58.108 [2024-11-20 13:42:57.420355] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:59.085 13:42:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:59.085 13:42:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:59.085 13:42:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:59.085 13:42:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:59.085 13:42:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:59.085 13:42:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:59.085 13:42:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.085 13:42:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.085 13:42:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:59.085 13:42:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.085 13:42:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:59.085 "name": "raid_bdev1", 00:22:59.085 "uuid": "ce775387-f092-4f78-9530-9b58b43e3873", 00:22:59.085 "strip_size_kb": 0, 00:22:59.085 "state": "online", 
00:22:59.085 "raid_level": "raid1", 00:22:59.085 "superblock": true, 00:22:59.085 "num_base_bdevs": 2, 00:22:59.085 "num_base_bdevs_discovered": 2, 00:22:59.085 "num_base_bdevs_operational": 2, 00:22:59.085 "process": { 00:22:59.085 "type": "rebuild", 00:22:59.085 "target": "spare", 00:22:59.085 "progress": { 00:22:59.085 "blocks": 2560, 00:22:59.085 "percent": 32 00:22:59.085 } 00:22:59.085 }, 00:22:59.085 "base_bdevs_list": [ 00:22:59.085 { 00:22:59.085 "name": "spare", 00:22:59.085 "uuid": "62852a27-8e86-543b-8eea-e3e947386116", 00:22:59.085 "is_configured": true, 00:22:59.085 "data_offset": 256, 00:22:59.085 "data_size": 7936 00:22:59.085 }, 00:22:59.085 { 00:22:59.085 "name": "BaseBdev2", 00:22:59.085 "uuid": "a33e8ac8-716d-5e3b-8c11-afc28998c0f6", 00:22:59.085 "is_configured": true, 00:22:59.085 "data_offset": 256, 00:22:59.085 "data_size": 7936 00:22:59.085 } 00:22:59.085 ] 00:22:59.085 }' 00:22:59.085 13:42:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:59.085 13:42:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:59.085 13:42:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:59.085 13:42:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:59.085 13:42:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:22:59.085 13:42:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.085 13:42:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:59.345 [2024-11-20 13:42:58.575922] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:59.345 [2024-11-20 13:42:58.625945] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:59.345 [2024-11-20 
13:42:58.626019] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:59.345 [2024-11-20 13:42:58.626036] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:59.345 [2024-11-20 13:42:58.626047] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:59.345 13:42:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.345 13:42:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:59.345 13:42:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:59.345 13:42:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:59.345 13:42:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:59.345 13:42:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:59.345 13:42:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:59.345 13:42:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:59.345 13:42:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:59.345 13:42:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:59.345 13:42:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:59.345 13:42:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:59.345 13:42:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.345 13:42:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.345 13:42:58 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:22:59.345 13:42:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.345 13:42:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:59.345 "name": "raid_bdev1", 00:22:59.345 "uuid": "ce775387-f092-4f78-9530-9b58b43e3873", 00:22:59.345 "strip_size_kb": 0, 00:22:59.345 "state": "online", 00:22:59.345 "raid_level": "raid1", 00:22:59.345 "superblock": true, 00:22:59.345 "num_base_bdevs": 2, 00:22:59.345 "num_base_bdevs_discovered": 1, 00:22:59.345 "num_base_bdevs_operational": 1, 00:22:59.345 "base_bdevs_list": [ 00:22:59.345 { 00:22:59.345 "name": null, 00:22:59.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:59.345 "is_configured": false, 00:22:59.345 "data_offset": 0, 00:22:59.345 "data_size": 7936 00:22:59.345 }, 00:22:59.345 { 00:22:59.345 "name": "BaseBdev2", 00:22:59.345 "uuid": "a33e8ac8-716d-5e3b-8c11-afc28998c0f6", 00:22:59.345 "is_configured": true, 00:22:59.345 "data_offset": 256, 00:22:59.345 "data_size": 7936 00:22:59.345 } 00:22:59.345 ] 00:22:59.345 }' 00:22:59.345 13:42:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:59.345 13:42:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:59.935 13:42:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:59.935 13:42:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.935 13:42:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:59.935 [2024-11-20 13:42:59.110266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:59.935 [2024-11-20 13:42:59.110344] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:59.935 [2024-11-20 13:42:59.110368] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000ab80 00:22:59.935 [2024-11-20 13:42:59.110382] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:59.935 [2024-11-20 13:42:59.110845] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:59.935 [2024-11-20 13:42:59.110875] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:59.935 [2024-11-20 13:42:59.110970] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:59.935 [2024-11-20 13:42:59.110988] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:59.935 [2024-11-20 13:42:59.110999] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:22:59.935 [2024-11-20 13:42:59.111027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:59.935 spare 00:22:59.935 [2024-11-20 13:42:59.126569] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:22:59.935 13:42:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.935 13:42:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:22:59.935 [2024-11-20 13:42:59.128665] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:00.876 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:00.876 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:00.876 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:00.876 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:00.876 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:00.876 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:00.876 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:00.876 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.876 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:00.876 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.876 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:00.876 "name": "raid_bdev1", 00:23:00.876 "uuid": "ce775387-f092-4f78-9530-9b58b43e3873", 00:23:00.876 "strip_size_kb": 0, 00:23:00.876 "state": "online", 00:23:00.876 "raid_level": "raid1", 00:23:00.876 "superblock": true, 00:23:00.876 "num_base_bdevs": 2, 00:23:00.876 "num_base_bdevs_discovered": 2, 00:23:00.876 "num_base_bdevs_operational": 2, 00:23:00.876 "process": { 00:23:00.876 "type": "rebuild", 00:23:00.876 "target": "spare", 00:23:00.876 "progress": { 00:23:00.876 "blocks": 2560, 00:23:00.876 "percent": 32 00:23:00.876 } 00:23:00.876 }, 00:23:00.876 "base_bdevs_list": [ 00:23:00.876 { 00:23:00.876 "name": "spare", 00:23:00.876 "uuid": "62852a27-8e86-543b-8eea-e3e947386116", 00:23:00.876 "is_configured": true, 00:23:00.876 "data_offset": 256, 00:23:00.876 "data_size": 7936 00:23:00.876 }, 00:23:00.876 { 00:23:00.876 "name": "BaseBdev2", 00:23:00.877 "uuid": "a33e8ac8-716d-5e3b-8c11-afc28998c0f6", 00:23:00.877 "is_configured": true, 00:23:00.877 "data_offset": 256, 00:23:00.877 "data_size": 7936 00:23:00.877 } 00:23:00.877 ] 00:23:00.877 }' 00:23:00.877 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:00.877 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:23:00.877 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:00.877 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:00.877 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:23:00.877 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.877 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:00.877 [2024-11-20 13:43:00.248475] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:00.877 [2024-11-20 13:43:00.333917] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:00.877 [2024-11-20 13:43:00.334209] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:00.877 [2024-11-20 13:43:00.334235] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:00.877 [2024-11-20 13:43:00.334247] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:01.136 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.136 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:01.136 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:01.136 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:01.136 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:01.136 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:01.136 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:23:01.136 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:01.136 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:01.136 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:01.136 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:01.136 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:01.136 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:01.136 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.136 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:01.136 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.136 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:01.136 "name": "raid_bdev1", 00:23:01.136 "uuid": "ce775387-f092-4f78-9530-9b58b43e3873", 00:23:01.136 "strip_size_kb": 0, 00:23:01.136 "state": "online", 00:23:01.136 "raid_level": "raid1", 00:23:01.136 "superblock": true, 00:23:01.136 "num_base_bdevs": 2, 00:23:01.136 "num_base_bdevs_discovered": 1, 00:23:01.136 "num_base_bdevs_operational": 1, 00:23:01.136 "base_bdevs_list": [ 00:23:01.136 { 00:23:01.136 "name": null, 00:23:01.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.136 "is_configured": false, 00:23:01.136 "data_offset": 0, 00:23:01.136 "data_size": 7936 00:23:01.136 }, 00:23:01.136 { 00:23:01.136 "name": "BaseBdev2", 00:23:01.136 "uuid": "a33e8ac8-716d-5e3b-8c11-afc28998c0f6", 00:23:01.136 "is_configured": true, 00:23:01.136 "data_offset": 256, 00:23:01.136 "data_size": 7936 00:23:01.136 } 00:23:01.136 ] 00:23:01.136 }' 
00:23:01.136 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:01.136 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:01.396 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:01.396 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:01.396 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:01.396 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:01.396 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:01.396 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:01.396 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:01.396 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.396 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:01.396 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.396 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:01.396 "name": "raid_bdev1", 00:23:01.396 "uuid": "ce775387-f092-4f78-9530-9b58b43e3873", 00:23:01.396 "strip_size_kb": 0, 00:23:01.396 "state": "online", 00:23:01.396 "raid_level": "raid1", 00:23:01.396 "superblock": true, 00:23:01.396 "num_base_bdevs": 2, 00:23:01.396 "num_base_bdevs_discovered": 1, 00:23:01.396 "num_base_bdevs_operational": 1, 00:23:01.396 "base_bdevs_list": [ 00:23:01.396 { 00:23:01.396 "name": null, 00:23:01.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.396 "is_configured": false, 00:23:01.396 "data_offset": 0, 
00:23:01.396 "data_size": 7936 00:23:01.396 }, 00:23:01.396 { 00:23:01.396 "name": "BaseBdev2", 00:23:01.396 "uuid": "a33e8ac8-716d-5e3b-8c11-afc28998c0f6", 00:23:01.396 "is_configured": true, 00:23:01.396 "data_offset": 256, 00:23:01.396 "data_size": 7936 00:23:01.396 } 00:23:01.396 ] 00:23:01.396 }' 00:23:01.396 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:01.396 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:01.396 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:01.655 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:01.655 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:23:01.655 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.655 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:01.655 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.655 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:01.655 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.655 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:01.655 [2024-11-20 13:43:00.927071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:01.655 [2024-11-20 13:43:00.927250] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:01.655 [2024-11-20 13:43:00.927361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:23:01.655 [2024-11-20 13:43:00.927451] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:01.655 [2024-11-20 13:43:00.927940] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:01.655 [2024-11-20 13:43:00.928082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:01.655 [2024-11-20 13:43:00.928281] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:01.655 [2024-11-20 13:43:00.928305] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:01.655 [2024-11-20 13:43:00.928321] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:01.655 [2024-11-20 13:43:00.928333] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:23:01.655 BaseBdev1 00:23:01.655 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.655 13:43:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:23:02.593 13:43:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:02.593 13:43:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:02.593 13:43:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:02.593 13:43:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:02.593 13:43:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:02.593 13:43:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:02.593 13:43:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:02.593 13:43:01 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:02.593 13:43:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:02.593 13:43:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:02.593 13:43:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:02.593 13:43:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:02.593 13:43:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.593 13:43:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:02.593 13:43:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.593 13:43:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:02.593 "name": "raid_bdev1", 00:23:02.593 "uuid": "ce775387-f092-4f78-9530-9b58b43e3873", 00:23:02.593 "strip_size_kb": 0, 00:23:02.593 "state": "online", 00:23:02.593 "raid_level": "raid1", 00:23:02.593 "superblock": true, 00:23:02.593 "num_base_bdevs": 2, 00:23:02.593 "num_base_bdevs_discovered": 1, 00:23:02.593 "num_base_bdevs_operational": 1, 00:23:02.593 "base_bdevs_list": [ 00:23:02.593 { 00:23:02.593 "name": null, 00:23:02.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:02.593 "is_configured": false, 00:23:02.593 "data_offset": 0, 00:23:02.593 "data_size": 7936 00:23:02.593 }, 00:23:02.593 { 00:23:02.593 "name": "BaseBdev2", 00:23:02.593 "uuid": "a33e8ac8-716d-5e3b-8c11-afc28998c0f6", 00:23:02.593 "is_configured": true, 00:23:02.593 "data_offset": 256, 00:23:02.593 "data_size": 7936 00:23:02.593 } 00:23:02.593 ] 00:23:02.593 }' 00:23:02.593 13:43:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:02.593 13:43:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:23:03.161 13:43:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:03.161 13:43:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:03.161 13:43:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:03.161 13:43:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:03.162 13:43:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:03.162 13:43:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:03.162 13:43:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:03.162 13:43:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.162 13:43:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:03.162 13:43:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.162 13:43:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:03.162 "name": "raid_bdev1", 00:23:03.162 "uuid": "ce775387-f092-4f78-9530-9b58b43e3873", 00:23:03.162 "strip_size_kb": 0, 00:23:03.162 "state": "online", 00:23:03.162 "raid_level": "raid1", 00:23:03.162 "superblock": true, 00:23:03.162 "num_base_bdevs": 2, 00:23:03.162 "num_base_bdevs_discovered": 1, 00:23:03.162 "num_base_bdevs_operational": 1, 00:23:03.162 "base_bdevs_list": [ 00:23:03.162 { 00:23:03.162 "name": null, 00:23:03.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.162 "is_configured": false, 00:23:03.162 "data_offset": 0, 00:23:03.162 "data_size": 7936 00:23:03.162 }, 00:23:03.162 { 00:23:03.162 "name": "BaseBdev2", 00:23:03.162 "uuid": "a33e8ac8-716d-5e3b-8c11-afc28998c0f6", 00:23:03.162 "is_configured": true, 
00:23:03.162 "data_offset": 256, 00:23:03.162 "data_size": 7936 00:23:03.162 } 00:23:03.162 ] 00:23:03.162 }' 00:23:03.162 13:43:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:03.162 13:43:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:03.162 13:43:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:03.162 13:43:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:03.162 13:43:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:03.162 13:43:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:23:03.162 13:43:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:03.162 13:43:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:03.162 13:43:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:03.162 13:43:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:03.162 13:43:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:03.162 13:43:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:03.162 13:43:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.162 13:43:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:03.162 [2024-11-20 13:43:02.521770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:03.162 [2024-11-20 13:43:02.522091] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:03.162 [2024-11-20 13:43:02.522222] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:03.162 request: 00:23:03.162 { 00:23:03.162 "base_bdev": "BaseBdev1", 00:23:03.162 "raid_bdev": "raid_bdev1", 00:23:03.162 "method": "bdev_raid_add_base_bdev", 00:23:03.162 "req_id": 1 00:23:03.162 } 00:23:03.162 Got JSON-RPC error response 00:23:03.162 response: 00:23:03.162 { 00:23:03.162 "code": -22, 00:23:03.162 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:23:03.162 } 00:23:03.162 13:43:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:03.162 13:43:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:23:03.162 13:43:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:03.162 13:43:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:03.162 13:43:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:03.162 13:43:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:23:04.201 13:43:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:04.201 13:43:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:04.201 13:43:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:04.201 13:43:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:04.201 13:43:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:04.201 13:43:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:23:04.201 13:43:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:04.201 13:43:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:04.201 13:43:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:04.201 13:43:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:04.201 13:43:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:04.201 13:43:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:04.201 13:43:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.201 13:43:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:04.201 13:43:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.201 13:43:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:04.201 "name": "raid_bdev1", 00:23:04.201 "uuid": "ce775387-f092-4f78-9530-9b58b43e3873", 00:23:04.201 "strip_size_kb": 0, 00:23:04.201 "state": "online", 00:23:04.201 "raid_level": "raid1", 00:23:04.201 "superblock": true, 00:23:04.201 "num_base_bdevs": 2, 00:23:04.201 "num_base_bdevs_discovered": 1, 00:23:04.201 "num_base_bdevs_operational": 1, 00:23:04.201 "base_bdevs_list": [ 00:23:04.201 { 00:23:04.201 "name": null, 00:23:04.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.201 "is_configured": false, 00:23:04.201 "data_offset": 0, 00:23:04.201 "data_size": 7936 00:23:04.201 }, 00:23:04.201 { 00:23:04.201 "name": "BaseBdev2", 00:23:04.201 "uuid": "a33e8ac8-716d-5e3b-8c11-afc28998c0f6", 00:23:04.201 "is_configured": true, 00:23:04.201 "data_offset": 256, 00:23:04.201 "data_size": 7936 00:23:04.201 } 00:23:04.201 ] 00:23:04.201 }' 
00:23:04.201 13:43:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:04.201 13:43:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:04.767 13:43:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:04.767 13:43:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:04.767 13:43:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:04.767 13:43:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:04.767 13:43:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:04.767 13:43:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:04.767 13:43:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.767 13:43:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:04.767 13:43:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:04.767 13:43:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.767 13:43:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:04.767 "name": "raid_bdev1", 00:23:04.767 "uuid": "ce775387-f092-4f78-9530-9b58b43e3873", 00:23:04.767 "strip_size_kb": 0, 00:23:04.767 "state": "online", 00:23:04.767 "raid_level": "raid1", 00:23:04.767 "superblock": true, 00:23:04.767 "num_base_bdevs": 2, 00:23:04.767 "num_base_bdevs_discovered": 1, 00:23:04.767 "num_base_bdevs_operational": 1, 00:23:04.767 "base_bdevs_list": [ 00:23:04.767 { 00:23:04.767 "name": null, 00:23:04.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.767 "is_configured": false, 00:23:04.767 "data_offset": 0, 
00:23:04.767 "data_size": 7936 00:23:04.767 }, 00:23:04.767 { 00:23:04.767 "name": "BaseBdev2", 00:23:04.767 "uuid": "a33e8ac8-716d-5e3b-8c11-afc28998c0f6", 00:23:04.767 "is_configured": true, 00:23:04.767 "data_offset": 256, 00:23:04.767 "data_size": 7936 00:23:04.767 } 00:23:04.767 ] 00:23:04.767 }' 00:23:04.767 13:43:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:04.767 13:43:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:04.767 13:43:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:04.767 13:43:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:04.767 13:43:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86314 00:23:04.767 13:43:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86314 ']' 00:23:04.767 13:43:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86314 00:23:04.767 13:43:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:23:04.767 13:43:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:04.767 13:43:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86314 00:23:04.767 killing process with pid 86314 00:23:04.767 Received shutdown signal, test time was about 60.000000 seconds 00:23:04.767 00:23:04.767 Latency(us) 00:23:04.767 [2024-11-20T13:43:04.252Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:04.767 [2024-11-20T13:43:04.252Z] =================================================================================================================== 00:23:04.767 [2024-11-20T13:43:04.252Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:04.767 13:43:04 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:04.767 13:43:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:04.767 13:43:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86314' 00:23:04.767 13:43:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86314 00:23:04.767 [2024-11-20 13:43:04.129568] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:04.767 13:43:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86314 00:23:04.767 [2024-11-20 13:43:04.129697] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:04.767 [2024-11-20 13:43:04.129745] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:04.767 [2024-11-20 13:43:04.129759] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:23:05.026 [2024-11-20 13:43:04.430717] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:06.401 13:43:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:23:06.401 00:23:06.401 real 0m19.983s 00:23:06.401 user 0m25.907s 00:23:06.401 sys 0m2.899s 00:23:06.401 13:43:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:06.401 13:43:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:06.401 ************************************ 00:23:06.401 END TEST raid_rebuild_test_sb_4k 00:23:06.401 ************************************ 00:23:06.401 13:43:05 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:23:06.401 13:43:05 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:23:06.401 13:43:05 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:06.401 13:43:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:06.401 13:43:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:06.401 ************************************ 00:23:06.401 START TEST raid_state_function_test_sb_md_separate 00:23:06.401 ************************************ 00:23:06.401 13:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:23:06.401 13:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:23:06.401 13:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:23:06.401 13:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:23:06.401 13:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:23:06.401 13:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:23:06.401 13:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:06.401 13:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:23:06.401 13:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:06.401 13:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:06.401 13:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:23:06.401 13:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:06.401 13:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:23:06.401 13:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:06.401 13:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:23:06.401 13:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:23:06.401 13:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:23:06.401 13:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:23:06.401 13:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:23:06.401 13:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:23:06.401 13:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:23:06.401 13:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:23:06.401 13:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:23:06.401 13:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87004 00:23:06.401 Process raid pid: 87004 00:23:06.401 13:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:06.401 13:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87004' 00:23:06.401 13:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87004 00:23:06.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:06.401 13:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87004 ']' 00:23:06.401 13:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.401 13:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:06.401 13:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.401 13:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:06.401 13:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:06.401 [2024-11-20 13:43:05.720421] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:23:06.401 [2024-11-20 13:43:05.720764] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:06.661 [2024-11-20 13:43:05.899902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.661 [2024-11-20 13:43:06.015047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.919 [2024-11-20 13:43:06.230352] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:06.919 [2024-11-20 13:43:06.230599] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:07.178 13:43:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:07.178 13:43:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:23:07.178 13:43:06 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:07.178 13:43:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.178 13:43:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:07.178 [2024-11-20 13:43:06.551263] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:07.178 [2024-11-20 13:43:06.551444] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:07.178 [2024-11-20 13:43:06.551580] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:07.178 [2024-11-20 13:43:06.551630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:07.178 13:43:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.178 13:43:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:07.178 13:43:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:07.178 13:43:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:07.178 13:43:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:07.178 13:43:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:07.178 13:43:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:07.178 13:43:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:23:07.178 13:43:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:07.178 13:43:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:07.178 13:43:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:07.178 13:43:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:07.178 13:43:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.178 13:43:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:07.178 13:43:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:07.178 13:43:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.178 13:43:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:07.178 "name": "Existed_Raid", 00:23:07.178 "uuid": "6d9aabcf-498c-42c5-8f27-29e3e3726da3", 00:23:07.178 "strip_size_kb": 0, 00:23:07.178 "state": "configuring", 00:23:07.178 "raid_level": "raid1", 00:23:07.178 "superblock": true, 00:23:07.178 "num_base_bdevs": 2, 00:23:07.178 "num_base_bdevs_discovered": 0, 00:23:07.178 "num_base_bdevs_operational": 2, 00:23:07.178 "base_bdevs_list": [ 00:23:07.178 { 00:23:07.178 "name": "BaseBdev1", 00:23:07.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.178 "is_configured": false, 00:23:07.178 "data_offset": 0, 00:23:07.178 "data_size": 0 00:23:07.178 }, 00:23:07.178 { 00:23:07.178 "name": "BaseBdev2", 00:23:07.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.178 "is_configured": false, 00:23:07.178 "data_offset": 0, 00:23:07.178 "data_size": 0 00:23:07.179 } 00:23:07.179 ] 
00:23:07.179 }' 00:23:07.179 13:43:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:07.179 13:43:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:07.748 13:43:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:07.748 13:43:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.748 13:43:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:07.748 [2024-11-20 13:43:06.978588] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:07.748 [2024-11-20 13:43:06.978624] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:23:07.748 13:43:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.748 13:43:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:07.748 13:43:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.748 13:43:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:07.748 [2024-11-20 13:43:06.990556] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:07.748 [2024-11-20 13:43:06.990723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:07.748 [2024-11-20 13:43:06.990806] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:07.748 [2024-11-20 13:43:06.990833] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:07.748 
13:43:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.748 13:43:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:23:07.748 13:43:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.748 13:43:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:07.748 [2024-11-20 13:43:07.040339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:07.748 BaseBdev1 00:23:07.748 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.748 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:23:07.748 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:23:07.748 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:07.748 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:23:07.748 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:07.748 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:07.748 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:07.748 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.748 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:07.748 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.748 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:07.748 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.748 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:07.748 [ 00:23:07.748 { 00:23:07.748 "name": "BaseBdev1", 00:23:07.748 "aliases": [ 00:23:07.748 "b9bca747-58ee-410a-8a11-fba098d93d9e" 00:23:07.748 ], 00:23:07.748 "product_name": "Malloc disk", 00:23:07.748 "block_size": 4096, 00:23:07.748 "num_blocks": 8192, 00:23:07.748 "uuid": "b9bca747-58ee-410a-8a11-fba098d93d9e", 00:23:07.748 "md_size": 32, 00:23:07.748 "md_interleave": false, 00:23:07.748 "dif_type": 0, 00:23:07.748 "assigned_rate_limits": { 00:23:07.748 "rw_ios_per_sec": 0, 00:23:07.748 "rw_mbytes_per_sec": 0, 00:23:07.748 "r_mbytes_per_sec": 0, 00:23:07.748 "w_mbytes_per_sec": 0 00:23:07.748 }, 00:23:07.748 "claimed": true, 00:23:07.748 "claim_type": "exclusive_write", 00:23:07.748 "zoned": false, 00:23:07.748 "supported_io_types": { 00:23:07.748 "read": true, 00:23:07.748 "write": true, 00:23:07.748 "unmap": true, 00:23:07.748 "flush": true, 00:23:07.748 "reset": true, 00:23:07.748 "nvme_admin": false, 00:23:07.748 "nvme_io": false, 00:23:07.748 "nvme_io_md": false, 00:23:07.748 "write_zeroes": true, 00:23:07.748 "zcopy": true, 00:23:07.748 "get_zone_info": false, 00:23:07.748 "zone_management": false, 00:23:07.748 "zone_append": false, 00:23:07.748 "compare": false, 00:23:07.748 "compare_and_write": false, 00:23:07.748 "abort": true, 00:23:07.748 "seek_hole": false, 00:23:07.748 "seek_data": false, 00:23:07.748 "copy": true, 00:23:07.748 "nvme_iov_md": false 00:23:07.748 }, 00:23:07.748 "memory_domains": [ 00:23:07.748 { 00:23:07.748 "dma_device_id": "system", 00:23:07.748 "dma_device_type": 1 00:23:07.748 }, 
00:23:07.748 { 00:23:07.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:07.748 "dma_device_type": 2 00:23:07.748 } 00:23:07.748 ], 00:23:07.748 "driver_specific": {} 00:23:07.748 } 00:23:07.748 ] 00:23:07.748 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.748 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:23:07.748 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:07.748 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:07.748 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:07.748 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:07.748 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:07.748 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:07.748 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:07.748 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:07.748 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:07.748 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:07.748 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:07.748 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:23:07.748 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.748 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:07.748 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.748 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:07.748 "name": "Existed_Raid", 00:23:07.748 "uuid": "5a1754e7-f5bc-4082-80f7-74b86b89a8a9", 00:23:07.748 "strip_size_kb": 0, 00:23:07.748 "state": "configuring", 00:23:07.748 "raid_level": "raid1", 00:23:07.748 "superblock": true, 00:23:07.748 "num_base_bdevs": 2, 00:23:07.748 "num_base_bdevs_discovered": 1, 00:23:07.748 "num_base_bdevs_operational": 2, 00:23:07.748 "base_bdevs_list": [ 00:23:07.748 { 00:23:07.748 "name": "BaseBdev1", 00:23:07.748 "uuid": "b9bca747-58ee-410a-8a11-fba098d93d9e", 00:23:07.748 "is_configured": true, 00:23:07.748 "data_offset": 256, 00:23:07.748 "data_size": 7936 00:23:07.748 }, 00:23:07.748 { 00:23:07.748 "name": "BaseBdev2", 00:23:07.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.748 "is_configured": false, 00:23:07.748 "data_offset": 0, 00:23:07.748 "data_size": 0 00:23:07.748 } 00:23:07.748 ] 00:23:07.748 }' 00:23:07.748 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:07.749 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:08.009 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:08.009 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.009 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:23:08.009 [2024-11-20 13:43:07.480192] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:08.009 [2024-11-20 13:43:07.480243] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:23:08.009 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.009 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:08.009 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.009 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:08.009 [2024-11-20 13:43:07.492231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:08.268 [2024-11-20 13:43:07.494500] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:08.268 [2024-11-20 13:43:07.494654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:08.268 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.268 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:23:08.268 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:08.268 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:08.268 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:08.268 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:23:08.268 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:08.268 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:08.268 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:08.268 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:08.268 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:08.268 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:08.268 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:08.268 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:08.268 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:08.268 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.268 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:08.268 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.268 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:08.268 "name": "Existed_Raid", 00:23:08.268 "uuid": "4ddedee0-ca6a-4258-8d91-6e674900ef69", 00:23:08.268 "strip_size_kb": 0, 00:23:08.269 "state": "configuring", 00:23:08.269 "raid_level": "raid1", 00:23:08.269 "superblock": true, 00:23:08.269 "num_base_bdevs": 2, 00:23:08.269 "num_base_bdevs_discovered": 1, 00:23:08.269 
"num_base_bdevs_operational": 2, 00:23:08.269 "base_bdevs_list": [ 00:23:08.269 { 00:23:08.269 "name": "BaseBdev1", 00:23:08.269 "uuid": "b9bca747-58ee-410a-8a11-fba098d93d9e", 00:23:08.269 "is_configured": true, 00:23:08.269 "data_offset": 256, 00:23:08.269 "data_size": 7936 00:23:08.269 }, 00:23:08.269 { 00:23:08.269 "name": "BaseBdev2", 00:23:08.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:08.269 "is_configured": false, 00:23:08.269 "data_offset": 0, 00:23:08.269 "data_size": 0 00:23:08.269 } 00:23:08.269 ] 00:23:08.269 }' 00:23:08.269 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:08.269 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:08.529 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:23:08.529 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.529 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:08.529 [2024-11-20 13:43:07.943128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:08.529 [2024-11-20 13:43:07.943584] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:08.529 [2024-11-20 13:43:07.943609] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:08.529 [2024-11-20 13:43:07.943695] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:08.529 [2024-11-20 13:43:07.943825] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:08.529 [2024-11-20 13:43:07.943839] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:23:08.529 [2024-11-20 
13:43:07.943932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:08.529 BaseBdev2 00:23:08.529 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.529 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:23:08.529 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:23:08.529 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:08.529 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:23:08.529 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:08.529 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:08.529 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:08.529 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.529 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:08.529 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.529 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:08.529 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.529 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:08.529 [ 00:23:08.529 { 00:23:08.529 "name": "BaseBdev2", 00:23:08.529 "aliases": [ 00:23:08.529 
"cc88465d-02d7-41ec-adff-4f78a5bcef94" 00:23:08.529 ], 00:23:08.529 "product_name": "Malloc disk", 00:23:08.529 "block_size": 4096, 00:23:08.529 "num_blocks": 8192, 00:23:08.529 "uuid": "cc88465d-02d7-41ec-adff-4f78a5bcef94", 00:23:08.529 "md_size": 32, 00:23:08.529 "md_interleave": false, 00:23:08.529 "dif_type": 0, 00:23:08.529 "assigned_rate_limits": { 00:23:08.529 "rw_ios_per_sec": 0, 00:23:08.529 "rw_mbytes_per_sec": 0, 00:23:08.529 "r_mbytes_per_sec": 0, 00:23:08.529 "w_mbytes_per_sec": 0 00:23:08.529 }, 00:23:08.529 "claimed": true, 00:23:08.529 "claim_type": "exclusive_write", 00:23:08.529 "zoned": false, 00:23:08.529 "supported_io_types": { 00:23:08.529 "read": true, 00:23:08.529 "write": true, 00:23:08.529 "unmap": true, 00:23:08.529 "flush": true, 00:23:08.529 "reset": true, 00:23:08.529 "nvme_admin": false, 00:23:08.529 "nvme_io": false, 00:23:08.529 "nvme_io_md": false, 00:23:08.529 "write_zeroes": true, 00:23:08.529 "zcopy": true, 00:23:08.529 "get_zone_info": false, 00:23:08.529 "zone_management": false, 00:23:08.529 "zone_append": false, 00:23:08.529 "compare": false, 00:23:08.529 "compare_and_write": false, 00:23:08.529 "abort": true, 00:23:08.529 "seek_hole": false, 00:23:08.529 "seek_data": false, 00:23:08.529 "copy": true, 00:23:08.529 "nvme_iov_md": false 00:23:08.529 }, 00:23:08.529 "memory_domains": [ 00:23:08.529 { 00:23:08.529 "dma_device_id": "system", 00:23:08.529 "dma_device_type": 1 00:23:08.529 }, 00:23:08.529 { 00:23:08.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:08.529 "dma_device_type": 2 00:23:08.529 } 00:23:08.529 ], 00:23:08.529 "driver_specific": {} 00:23:08.529 } 00:23:08.529 ] 00:23:08.529 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.529 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:23:08.529 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( 
i++ )) 00:23:08.529 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:08.529 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:23:08.529 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:08.529 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:08.529 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:08.529 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:08.529 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:08.529 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:08.529 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:08.529 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:08.529 13:43:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:08.529 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:08.529 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.529 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:08.529 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:08.788 13:43:08 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.788 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:08.788 "name": "Existed_Raid", 00:23:08.788 "uuid": "4ddedee0-ca6a-4258-8d91-6e674900ef69", 00:23:08.788 "strip_size_kb": 0, 00:23:08.788 "state": "online", 00:23:08.788 "raid_level": "raid1", 00:23:08.788 "superblock": true, 00:23:08.788 "num_base_bdevs": 2, 00:23:08.788 "num_base_bdevs_discovered": 2, 00:23:08.788 "num_base_bdevs_operational": 2, 00:23:08.788 "base_bdevs_list": [ 00:23:08.788 { 00:23:08.788 "name": "BaseBdev1", 00:23:08.788 "uuid": "b9bca747-58ee-410a-8a11-fba098d93d9e", 00:23:08.788 "is_configured": true, 00:23:08.788 "data_offset": 256, 00:23:08.788 "data_size": 7936 00:23:08.788 }, 00:23:08.788 { 00:23:08.788 "name": "BaseBdev2", 00:23:08.788 "uuid": "cc88465d-02d7-41ec-adff-4f78a5bcef94", 00:23:08.788 "is_configured": true, 00:23:08.788 "data_offset": 256, 00:23:08.788 "data_size": 7936 00:23:08.788 } 00:23:08.788 ] 00:23:08.788 }' 00:23:08.788 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:08.788 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:09.048 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:23:09.048 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:09.048 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:09.048 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:09.048 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:23:09.048 13:43:08 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:09.048 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:09.048 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.048 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:09.048 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:09.048 [2024-11-20 13:43:08.410854] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:09.048 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.048 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:09.048 "name": "Existed_Raid", 00:23:09.048 "aliases": [ 00:23:09.048 "4ddedee0-ca6a-4258-8d91-6e674900ef69" 00:23:09.048 ], 00:23:09.048 "product_name": "Raid Volume", 00:23:09.048 "block_size": 4096, 00:23:09.048 "num_blocks": 7936, 00:23:09.048 "uuid": "4ddedee0-ca6a-4258-8d91-6e674900ef69", 00:23:09.048 "md_size": 32, 00:23:09.048 "md_interleave": false, 00:23:09.048 "dif_type": 0, 00:23:09.048 "assigned_rate_limits": { 00:23:09.048 "rw_ios_per_sec": 0, 00:23:09.048 "rw_mbytes_per_sec": 0, 00:23:09.048 "r_mbytes_per_sec": 0, 00:23:09.048 "w_mbytes_per_sec": 0 00:23:09.048 }, 00:23:09.048 "claimed": false, 00:23:09.048 "zoned": false, 00:23:09.048 "supported_io_types": { 00:23:09.048 "read": true, 00:23:09.048 "write": true, 00:23:09.048 "unmap": false, 00:23:09.048 "flush": false, 00:23:09.048 "reset": true, 00:23:09.048 "nvme_admin": false, 00:23:09.048 "nvme_io": false, 00:23:09.048 "nvme_io_md": false, 00:23:09.048 "write_zeroes": true, 00:23:09.048 "zcopy": false, 00:23:09.048 "get_zone_info": 
false, 00:23:09.048 "zone_management": false, 00:23:09.048 "zone_append": false, 00:23:09.048 "compare": false, 00:23:09.048 "compare_and_write": false, 00:23:09.048 "abort": false, 00:23:09.048 "seek_hole": false, 00:23:09.048 "seek_data": false, 00:23:09.049 "copy": false, 00:23:09.049 "nvme_iov_md": false 00:23:09.049 }, 00:23:09.049 "memory_domains": [ 00:23:09.049 { 00:23:09.049 "dma_device_id": "system", 00:23:09.049 "dma_device_type": 1 00:23:09.049 }, 00:23:09.049 { 00:23:09.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:09.049 "dma_device_type": 2 00:23:09.049 }, 00:23:09.049 { 00:23:09.049 "dma_device_id": "system", 00:23:09.049 "dma_device_type": 1 00:23:09.049 }, 00:23:09.049 { 00:23:09.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:09.049 "dma_device_type": 2 00:23:09.049 } 00:23:09.049 ], 00:23:09.049 "driver_specific": { 00:23:09.049 "raid": { 00:23:09.049 "uuid": "4ddedee0-ca6a-4258-8d91-6e674900ef69", 00:23:09.049 "strip_size_kb": 0, 00:23:09.049 "state": "online", 00:23:09.049 "raid_level": "raid1", 00:23:09.049 "superblock": true, 00:23:09.049 "num_base_bdevs": 2, 00:23:09.049 "num_base_bdevs_discovered": 2, 00:23:09.049 "num_base_bdevs_operational": 2, 00:23:09.049 "base_bdevs_list": [ 00:23:09.049 { 00:23:09.049 "name": "BaseBdev1", 00:23:09.049 "uuid": "b9bca747-58ee-410a-8a11-fba098d93d9e", 00:23:09.049 "is_configured": true, 00:23:09.049 "data_offset": 256, 00:23:09.049 "data_size": 7936 00:23:09.049 }, 00:23:09.049 { 00:23:09.049 "name": "BaseBdev2", 00:23:09.049 "uuid": "cc88465d-02d7-41ec-adff-4f78a5bcef94", 00:23:09.049 "is_configured": true, 00:23:09.049 "data_offset": 256, 00:23:09.049 "data_size": 7936 00:23:09.049 } 00:23:09.049 ] 00:23:09.049 } 00:23:09.049 } 00:23:09.049 }' 00:23:09.049 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:09.049 13:43:08 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:23:09.049 BaseBdev2' 00:23:09.049 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:09.049 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:23:09.049 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:09.049 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:23:09.049 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.049 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:09.049 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:09.307 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.307 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:23:09.307 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:23:09.307 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:09.307 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:09.307 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:23:09.307 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.307 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:09.307 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.307 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:23:09.308 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:23:09.308 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:09.308 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.308 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:09.308 [2024-11-20 13:43:08.626416] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:09.308 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.308 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:23:09.308 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:23:09.308 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:09.308 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:23:09.308 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:23:09.308 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid 
online raid1 0 1 00:23:09.308 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:09.308 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:09.308 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:09.308 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:09.308 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:09.308 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:09.308 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:09.308 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:09.308 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:09.308 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:09.308 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:09.308 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.308 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:09.308 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.308 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:09.308 "name": "Existed_Raid", 00:23:09.308 "uuid": 
"4ddedee0-ca6a-4258-8d91-6e674900ef69", 00:23:09.308 "strip_size_kb": 0, 00:23:09.308 "state": "online", 00:23:09.308 "raid_level": "raid1", 00:23:09.308 "superblock": true, 00:23:09.308 "num_base_bdevs": 2, 00:23:09.308 "num_base_bdevs_discovered": 1, 00:23:09.308 "num_base_bdevs_operational": 1, 00:23:09.308 "base_bdevs_list": [ 00:23:09.308 { 00:23:09.308 "name": null, 00:23:09.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:09.308 "is_configured": false, 00:23:09.308 "data_offset": 0, 00:23:09.308 "data_size": 7936 00:23:09.308 }, 00:23:09.308 { 00:23:09.308 "name": "BaseBdev2", 00:23:09.308 "uuid": "cc88465d-02d7-41ec-adff-4f78a5bcef94", 00:23:09.308 "is_configured": true, 00:23:09.308 "data_offset": 256, 00:23:09.308 "data_size": 7936 00:23:09.308 } 00:23:09.308 ] 00:23:09.308 }' 00:23:09.308 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:09.308 13:43:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:09.878 13:43:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:23:09.878 13:43:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:09.879 13:43:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:09.879 13:43:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:09.879 13:43:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.879 13:43:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:09.879 13:43:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.879 13:43:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 
-- # raid_bdev=Existed_Raid 00:23:09.879 13:43:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:09.879 13:43:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:23:09.879 13:43:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.879 13:43:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:09.879 [2024-11-20 13:43:09.209271] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:09.879 [2024-11-20 13:43:09.209500] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:09.879 [2024-11-20 13:43:09.314487] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:09.879 [2024-11-20 13:43:09.314721] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:09.879 [2024-11-20 13:43:09.314871] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:23:09.879 13:43:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.879 13:43:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:09.879 13:43:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:09.879 13:43:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:09.879 13:43:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.879 13:43:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:09.879 13:43:09 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:23:09.879 13:43:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:10.137 13:43:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:23:10.137 13:43:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:23:10.137 13:43:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:23:10.137 13:43:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87004
00:23:10.137 13:43:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87004 ']'
00:23:10.137 13:43:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87004
00:23:10.137 13:43:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname
00:23:10.137 13:43:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:10.137 13:43:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87004
00:23:10.137 13:43:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:23:10.137 killing process with pid 87004 13:43:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:23:10.137 13:43:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87004'
00:23:10.137 13:43:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87004
00:23:10.137 [2024-11-20 13:43:09.411300] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:23:10.137 13:43:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87004
00:23:10.137 [2024-11-20 13:43:09.427975] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:23:11.514 13:43:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0
00:23:11.514
00:23:11.514 real 0m4.955s
00:23:11.514 user 0m6.994s
00:23:11.514 sys 0m0.946s
00:23:11.514 ************************************
00:23:11.514 END TEST raid_state_function_test_sb_md_separate
00:23:11.514 ************************************
00:23:11.515 13:43:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:11.515 13:43:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:23:11.515 13:43:10 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2
00:23:11.515 13:43:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:23:11.515 13:43:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:23:11.515 13:43:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:23:11.515 ************************************
00:23:11.515 START TEST raid_superblock_test_md_separate
00:23:11.515 ************************************
00:23:11.515 13:43:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2
00:23:11.515 13:43:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1
00:23:11.515 13:43:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2
00:23:11.515 13:43:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:23:11.515 13:43:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:23:11.515 13:43:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:23:11.515 13:43:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:23:11.515 13:43:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:23:11.515 13:43:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:23:11.515 13:43:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:23:11.515 13:43:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size
00:23:11.515 13:43:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:23:11.515 13:43:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:23:11.515 13:43:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:23:11.515 13:43:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']'
00:23:11.515 13:43:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0
00:23:11.515 13:43:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87251
00:23:11.515 13:43:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87251
00:23:11.515 13:43:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:23:11.515 13:43:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87251 ']'
00:23:11.515 13:43:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:11.515 13:43:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:11.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:11.515 13:43:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:11.515 13:43:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:11.515 13:43:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:23:11.515 [2024-11-20 13:43:10.741418] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization...
00:23:11.515 [2024-11-20 13:43:10.741545] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87251 ]
00:23:11.515 [2024-11-20 13:43:10.923150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:11.773 [2024-11-20 13:43:11.034096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:11.773 [2024-11-20 13:43:11.234826] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:23:11.773 [2024-11-20 13:43:11.234888] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:23:12.342 malloc1
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:23:12.342 [2024-11-20 13:43:11.618497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:23:12.342 [2024-11-20 13:43:11.618558] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:12.342 [2024-11-20 13:43:11.618583] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:23:12.342 [2024-11-20 13:43:11.618595] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:12.342 [2024-11-20 13:43:11.620713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:12.342 [2024-11-20 13:43:11.620901] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:23:12.342 pt1
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:23:12.342 malloc2
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:23:12.342 [2024-11-20 13:43:11.674576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:23:12.342 [2024-11-20 13:43:11.674746] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:12.342 [2024-11-20 13:43:11.674804] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:23:12.342 [2024-11-20 13:43:11.674881] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:12.342 [2024-11-20 13:43:11.676990] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:12.342 [2024-11-20 13:43:11.677139] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:23:12.342 pt2
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:23:12.342 [2024-11-20 13:43:11.686588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:23:12.342 [2024-11-20 13:43:11.688700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:23:12.342 [2024-11-20 13:43:11.688984] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:23:12.342 [2024-11-20 13:43:11.689113] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:23:12.342 [2024-11-20 13:43:11.689228] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:23:12.342 [2024-11-20 13:43:11.689534] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:23:12.342 [2024-11-20 13:43:11.689639] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:23:12.342 [2024-11-20 13:43:11.689835] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:12.342 13:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:23:12.342 "name": "raid_bdev1",
00:23:12.342 "uuid": "6ffa13ee-53e9-4169-9eea-19955e7d04a7",
00:23:12.342 "strip_size_kb": 0,
00:23:12.342 "state": "online",
00:23:12.342 "raid_level": "raid1",
00:23:12.342 "superblock": true,
00:23:12.342 "num_base_bdevs": 2,
00:23:12.342 "num_base_bdevs_discovered": 2,
00:23:12.342 "num_base_bdevs_operational": 2,
00:23:12.342 "base_bdevs_list": [
00:23:12.342 {
00:23:12.342 "name": "pt1",
00:23:12.343 "uuid": "00000000-0000-0000-0000-000000000001",
00:23:12.343 "is_configured": true,
00:23:12.343 "data_offset": 256,
00:23:12.343 "data_size": 7936
00:23:12.343 },
00:23:12.343 {
00:23:12.343 "name": "pt2",
00:23:12.343 "uuid": "00000000-0000-0000-0000-000000000002",
00:23:12.343 "is_configured": true,
00:23:12.343 "data_offset": 256,
00:23:12.343 "data_size": 7936
00:23:12.343 }
00:23:12.343 ]
00:23:12.343 }'
00:23:12.343 13:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:23:12.343 13:43:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:23:12.909 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:23:12.909 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:23:12.909 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:23:12.909 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:23:12.909 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name
00:23:12.909 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:23:12.909 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:23:12.909 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:12.909 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:23:12.909 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:23:12.909 [2024-11-20 13:43:12.138732] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:23:12.909 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:12.909 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:23:12.909 "name": "raid_bdev1",
00:23:12.909 "aliases": [
00:23:12.909 "6ffa13ee-53e9-4169-9eea-19955e7d04a7"
00:23:12.909 ],
00:23:12.909 "product_name": "Raid Volume",
00:23:12.909 "block_size": 4096,
00:23:12.909 "num_blocks": 7936,
00:23:12.909 "uuid": "6ffa13ee-53e9-4169-9eea-19955e7d04a7",
00:23:12.909 "md_size": 32,
00:23:12.909 "md_interleave": false,
00:23:12.909 "dif_type": 0,
00:23:12.909 "assigned_rate_limits": {
00:23:12.909 "rw_ios_per_sec": 0,
00:23:12.909 "rw_mbytes_per_sec": 0,
00:23:12.909 "r_mbytes_per_sec": 0,
00:23:12.909 "w_mbytes_per_sec": 0
00:23:12.909 },
00:23:12.909 "claimed": false,
00:23:12.909 "zoned": false,
00:23:12.909 "supported_io_types": {
00:23:12.909 "read": true,
00:23:12.909 "write": true,
00:23:12.909 "unmap": false,
00:23:12.909 "flush": false,
00:23:12.909 "reset": true,
00:23:12.909 "nvme_admin": false,
00:23:12.909 "nvme_io": false,
00:23:12.909 "nvme_io_md": false,
00:23:12.909 "write_zeroes": true,
00:23:12.909 "zcopy": false,
00:23:12.909 "get_zone_info": false,
00:23:12.909 "zone_management": false,
00:23:12.909 "zone_append": false,
00:23:12.909 "compare": false,
00:23:12.909 "compare_and_write": false,
00:23:12.909 "abort": false,
00:23:12.909 "seek_hole": false,
00:23:12.909 "seek_data": false,
00:23:12.909 "copy": false,
00:23:12.909 "nvme_iov_md": false
00:23:12.909 },
00:23:12.909 "memory_domains": [
00:23:12.909 {
00:23:12.909 "dma_device_id": "system",
00:23:12.909 "dma_device_type": 1
00:23:12.909 },
00:23:12.909 {
00:23:12.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:12.909 "dma_device_type": 2
00:23:12.909 },
00:23:12.909 {
00:23:12.909 "dma_device_id": "system",
00:23:12.909 "dma_device_type": 1
00:23:12.909 },
00:23:12.909 {
00:23:12.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:12.909 "dma_device_type": 2
00:23:12.909 }
00:23:12.909 ],
00:23:12.909 "driver_specific": {
00:23:12.909 "raid": {
00:23:12.909 "uuid": "6ffa13ee-53e9-4169-9eea-19955e7d04a7",
00:23:12.909 "strip_size_kb": 0,
00:23:12.909 "state": "online",
00:23:12.909 "raid_level": "raid1",
00:23:12.909 "superblock": true,
00:23:12.909 "num_base_bdevs": 2,
00:23:12.909 "num_base_bdevs_discovered": 2,
00:23:12.909 "num_base_bdevs_operational": 2,
00:23:12.909 "base_bdevs_list": [
00:23:12.909 {
00:23:12.909 "name": "pt1",
00:23:12.909 "uuid": "00000000-0000-0000-0000-000000000001",
00:23:12.909 "is_configured": true,
00:23:12.909 "data_offset": 256,
00:23:12.909 "data_size": 7936
00:23:12.909 },
00:23:12.909 {
00:23:12.909 "name": "pt2",
00:23:12.909 "uuid": "00000000-0000-0000-0000-000000000002",
00:23:12.909 "is_configured": true,
00:23:12.909 "data_offset": 256,
00:23:12.909 "data_size": 7936
00:23:12.909 }
00:23:12.909 ]
00:23:12.909 }
00:23:12.909 }
00:23:12.909 }'
00:23:12.909 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:23:12.909 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:23:12.909 pt2'
00:23:12.909 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:23:12.909 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0'
00:23:12.909 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:23:12.909 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:23:12.909 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:23:12.909 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:12.909 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:23:12.909 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:12.909 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0'
00:23:12.909 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]
00:23:12.909 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:23:12.909 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:23:12.909 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:12.909 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:23:12.909 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:23:12.909 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:12.909 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0'
00:23:12.909 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]
00:23:12.909 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:23:12.909 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:12.909 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:23:12.909 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:23:12.909 [2024-11-20 13:43:12.378628] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6ffa13ee-53e9-4169-9eea-19955e7d04a7
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 6ffa13ee-53e9-4169-9eea-19955e7d04a7 ']'
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:23:13.169 [2024-11-20 13:43:12.426386] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:23:13.169 [2024-11-20 13:43:12.426526] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:23:13.169 [2024-11-20 13:43:12.426641] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:23:13.169 [2024-11-20 13:43:12.426699] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:23:13.169 [2024-11-20 13:43:12.426714] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:23:13.169 [2024-11-20 13:43:12.554396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:23:13.169 [2024-11-20 13:43:12.556506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:23:13.169 [2024-11-20 13:43:12.556587] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:23:13.169 [2024-11-20 13:43:12.556651] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:23:13.169 [2024-11-20 13:43:12.556669] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:23:13.169 [2024-11-20 13:43:12.556681] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:23:13.169 request:
00:23:13.169 {
00:23:13.169 "name": "raid_bdev1",
00:23:13.169 "raid_level": "raid1",
00:23:13.169 "base_bdevs": [
00:23:13.169 "malloc1",
00:23:13.169 "malloc2"
00:23:13.169 ],
00:23:13.169 "superblock": false,
00:23:13.169 "method": "bdev_raid_create",
00:23:13.169 "req_id": 1
00:23:13.169 }
00:23:13.169 Got JSON-RPC error response
00:23:13.169 response:
00:23:13.169 {
00:23:13.169 "code": -17,
00:23:13.169 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:23:13.169 }
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:23:13.169 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:13.170 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:23:13.170 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:23:13.170 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:23:13.170 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:13.170 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:23:13.170 [2024-11-20 13:43:12.610380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:23:13.170 [2024-11-20 13:43:12.610554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:13.170 [2024-11-20 13:43:12.610608] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:23:13.170 [2024-11-20 13:43:12.610770] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:13.170 [2024-11-20 13:43:12.612993] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:13.170 [2024-11-20 13:43:12.613147] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:23:13.170 [2024-11-20 13:43:12.613275] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:23:13.170 [2024-11-20 13:43:12.613412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:23:13.170 pt1
00:23:13.170 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:13.170 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:23:13.170 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:23:13.170 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:23:13.170 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:23:13.170 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:23:13.170 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:23:13.170 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:23:13.170 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:23:13.170 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:23:13.170 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:23:13.170 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:13.170 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:13.170 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:13.170 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:23:13.170 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:13.429 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:23:13.429 "name": "raid_bdev1",
00:23:13.429 "uuid": "6ffa13ee-53e9-4169-9eea-19955e7d04a7",
00:23:13.429 "strip_size_kb": 0,
00:23:13.429 "state": "configuring",
00:23:13.429 "raid_level": "raid1",
00:23:13.429 "superblock": true,
00:23:13.429 "num_base_bdevs": 2,
00:23:13.429 "num_base_bdevs_discovered": 1,
00:23:13.429 "num_base_bdevs_operational": 2,
00:23:13.429 "base_bdevs_list": [
00:23:13.429 {
00:23:13.429 "name": "pt1",
00:23:13.429 "uuid": "00000000-0000-0000-0000-000000000001",
00:23:13.429 "is_configured": true,
00:23:13.429 "data_offset": 256,
00:23:13.429 "data_size": 7936
00:23:13.429 },
00:23:13.429 {
00:23:13.429 "name": null,
00:23:13.429 "uuid": "00000000-0000-0000-0000-000000000002",
00:23:13.429 "is_configured": false,
00:23:13.429 "data_offset": 256,
00:23:13.429 "data_size": 7936
00:23:13.429 }
00:23:13.429 ]
00:23:13.429 }'
00:23:13.429 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:23:13.429 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:23:13.688 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:23:13.688 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:23:13.688 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:23:13.688 13:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:23:13.688 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:13.688 13:43:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:23:13.688 [2024-11-20 13:43:13.002398] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:23:13.689 [2024-11-20 13:43:13.002486] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:13.689 [2024-11-20 13:43:13.002509] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:23:13.689 [2024-11-20 13:43:13.002523] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-11-20 13:43:13.002765] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:13.689 [2024-11-20 13:43:13.002799] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:23:13.689 [2024-11-20 13:43:13.002854] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:23:13.689 [2024-11-20 13:43:13.002879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:23:13.689 [2024-11-20 13:43:13.002985] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:23:13.689 [2024-11-20 13:43:13.002998] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:23:13.689 [2024-11-20 13:43:13.003073] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:23:13.689 [2024-11-20 13:43:13.003395] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:23:13.689 [2024-11-20 13:43:13.003506] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:23:13.689 [2024-11-20 13:43:13.003712] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:23:13.689 pt2
00:23:13.689 13:43:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:13.689 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:23:13.689 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:23:13.689 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:23:13.689 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:23:13.689 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local
expected_state=online 00:23:13.689 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:13.689 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:13.689 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:13.689 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:13.689 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:13.689 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:13.689 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:13.689 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:13.689 13:43:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.689 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:13.689 13:43:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:13.689 13:43:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.689 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:13.689 "name": "raid_bdev1", 00:23:13.689 "uuid": "6ffa13ee-53e9-4169-9eea-19955e7d04a7", 00:23:13.689 "strip_size_kb": 0, 00:23:13.689 "state": "online", 00:23:13.689 "raid_level": "raid1", 00:23:13.689 "superblock": true, 00:23:13.689 "num_base_bdevs": 2, 00:23:13.689 "num_base_bdevs_discovered": 2, 00:23:13.689 "num_base_bdevs_operational": 2, 00:23:13.689 "base_bdevs_list": [ 00:23:13.689 { 00:23:13.689 "name": 
"pt1", 00:23:13.689 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:13.689 "is_configured": true, 00:23:13.689 "data_offset": 256, 00:23:13.689 "data_size": 7936 00:23:13.689 }, 00:23:13.689 { 00:23:13.689 "name": "pt2", 00:23:13.689 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:13.689 "is_configured": true, 00:23:13.689 "data_offset": 256, 00:23:13.689 "data_size": 7936 00:23:13.689 } 00:23:13.689 ] 00:23:13.689 }' 00:23:13.689 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:13.689 13:43:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:13.948 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:23:13.948 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:23:13.948 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:13.949 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:13.949 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:23:13.949 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:13.949 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:13.949 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:13.949 13:43:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.949 13:43:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:13.949 [2024-11-20 13:43:13.426678] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:14.208 13:43:13 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.208 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:14.208 "name": "raid_bdev1", 00:23:14.208 "aliases": [ 00:23:14.208 "6ffa13ee-53e9-4169-9eea-19955e7d04a7" 00:23:14.208 ], 00:23:14.208 "product_name": "Raid Volume", 00:23:14.208 "block_size": 4096, 00:23:14.208 "num_blocks": 7936, 00:23:14.208 "uuid": "6ffa13ee-53e9-4169-9eea-19955e7d04a7", 00:23:14.208 "md_size": 32, 00:23:14.208 "md_interleave": false, 00:23:14.208 "dif_type": 0, 00:23:14.208 "assigned_rate_limits": { 00:23:14.208 "rw_ios_per_sec": 0, 00:23:14.208 "rw_mbytes_per_sec": 0, 00:23:14.208 "r_mbytes_per_sec": 0, 00:23:14.208 "w_mbytes_per_sec": 0 00:23:14.208 }, 00:23:14.208 "claimed": false, 00:23:14.208 "zoned": false, 00:23:14.208 "supported_io_types": { 00:23:14.208 "read": true, 00:23:14.208 "write": true, 00:23:14.208 "unmap": false, 00:23:14.208 "flush": false, 00:23:14.208 "reset": true, 00:23:14.208 "nvme_admin": false, 00:23:14.208 "nvme_io": false, 00:23:14.208 "nvme_io_md": false, 00:23:14.208 "write_zeroes": true, 00:23:14.208 "zcopy": false, 00:23:14.208 "get_zone_info": false, 00:23:14.208 "zone_management": false, 00:23:14.208 "zone_append": false, 00:23:14.208 "compare": false, 00:23:14.208 "compare_and_write": false, 00:23:14.208 "abort": false, 00:23:14.208 "seek_hole": false, 00:23:14.208 "seek_data": false, 00:23:14.208 "copy": false, 00:23:14.208 "nvme_iov_md": false 00:23:14.208 }, 00:23:14.208 "memory_domains": [ 00:23:14.208 { 00:23:14.208 "dma_device_id": "system", 00:23:14.208 "dma_device_type": 1 00:23:14.208 }, 00:23:14.208 { 00:23:14.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:14.208 "dma_device_type": 2 00:23:14.208 }, 00:23:14.208 { 00:23:14.208 "dma_device_id": "system", 00:23:14.208 "dma_device_type": 1 00:23:14.208 }, 00:23:14.208 { 00:23:14.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:14.208 
"dma_device_type": 2 00:23:14.208 } 00:23:14.208 ], 00:23:14.208 "driver_specific": { 00:23:14.208 "raid": { 00:23:14.208 "uuid": "6ffa13ee-53e9-4169-9eea-19955e7d04a7", 00:23:14.208 "strip_size_kb": 0, 00:23:14.208 "state": "online", 00:23:14.208 "raid_level": "raid1", 00:23:14.208 "superblock": true, 00:23:14.208 "num_base_bdevs": 2, 00:23:14.208 "num_base_bdevs_discovered": 2, 00:23:14.208 "num_base_bdevs_operational": 2, 00:23:14.208 "base_bdevs_list": [ 00:23:14.208 { 00:23:14.208 "name": "pt1", 00:23:14.208 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:14.208 "is_configured": true, 00:23:14.208 "data_offset": 256, 00:23:14.208 "data_size": 7936 00:23:14.208 }, 00:23:14.208 { 00:23:14.208 "name": "pt2", 00:23:14.208 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:14.208 "is_configured": true, 00:23:14.208 "data_offset": 256, 00:23:14.208 "data_size": 7936 00:23:14.208 } 00:23:14.208 ] 00:23:14.208 } 00:23:14.208 } 00:23:14.208 }' 00:23:14.208 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:14.208 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:23:14.208 pt2' 00:23:14.208 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:14.208 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:23:14.208 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:14.208 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:14.208 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 
00:23:14.208 13:43:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.208 13:43:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:14.208 13:43:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.208 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:23:14.208 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:23:14.208 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:14.208 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:14.208 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:23:14.208 13:43:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.208 13:43:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:14.208 13:43:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.208 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:23:14.208 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:23:14.208 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:14.208 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:23:14.208 13:43:13 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.208 13:43:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:14.208 [2024-11-20 13:43:13.642692] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:14.208 13:43:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.208 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 6ffa13ee-53e9-4169-9eea-19955e7d04a7 '!=' 6ffa13ee-53e9-4169-9eea-19955e7d04a7 ']' 00:23:14.208 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:23:14.208 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:14.208 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:23:14.208 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:23:14.208 13:43:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.208 13:43:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:14.467 [2024-11-20 13:43:13.690464] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:14.467 13:43:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.467 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:14.467 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:14.467 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:14.467 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:23:14.467 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:14.467 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:14.467 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:14.467 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:14.467 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:14.467 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:14.467 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:14.467 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:14.467 13:43:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.467 13:43:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:14.467 13:43:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.467 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:14.467 "name": "raid_bdev1", 00:23:14.467 "uuid": "6ffa13ee-53e9-4169-9eea-19955e7d04a7", 00:23:14.467 "strip_size_kb": 0, 00:23:14.467 "state": "online", 00:23:14.467 "raid_level": "raid1", 00:23:14.467 "superblock": true, 00:23:14.467 "num_base_bdevs": 2, 00:23:14.467 "num_base_bdevs_discovered": 1, 00:23:14.467 "num_base_bdevs_operational": 1, 00:23:14.467 "base_bdevs_list": [ 00:23:14.467 { 00:23:14.467 "name": null, 00:23:14.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.467 "is_configured": false, 00:23:14.467 "data_offset": 0, 
00:23:14.467 "data_size": 7936 00:23:14.467 }, 00:23:14.467 { 00:23:14.467 "name": "pt2", 00:23:14.467 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:14.467 "is_configured": true, 00:23:14.467 "data_offset": 256, 00:23:14.467 "data_size": 7936 00:23:14.467 } 00:23:14.467 ] 00:23:14.467 }' 00:23:14.467 13:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:14.467 13:43:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:14.725 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:14.725 13:43:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.725 13:43:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:14.725 [2024-11-20 13:43:14.122208] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:14.725 [2024-11-20 13:43:14.122239] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:14.725 [2024-11-20 13:43:14.122329] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:14.725 [2024-11-20 13:43:14.122376] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:14.725 [2024-11-20 13:43:14.122390] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:23:14.725 13:43:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.725 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:14.725 13:43:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.725 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 
-- # jq -r '.[]' 00:23:14.725 13:43:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:14.725 13:43:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.725 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:23:14.725 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:23:14.725 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:23:14.725 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:14.725 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:23:14.725 13:43:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.725 13:43:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:14.725 13:43:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.725 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:23:14.725 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:14.725 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:23:14.725 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:23:14.725 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:23:14.725 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:14.725 13:43:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:14.725 13:43:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:14.725 [2024-11-20 13:43:14.194188] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:14.725 [2024-11-20 13:43:14.194397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:14.725 [2024-11-20 13:43:14.194450] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:23:14.725 [2024-11-20 13:43:14.194558] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:14.725 [2024-11-20 13:43:14.196813] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:14.725 [2024-11-20 13:43:14.196962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:14.725 [2024-11-20 13:43:14.197092] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:14.725 [2024-11-20 13:43:14.197180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:14.725 [2024-11-20 13:43:14.197354] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:23:14.725 [2024-11-20 13:43:14.197400] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:14.725 [2024-11-20 13:43:14.197495] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:14.725 [2024-11-20 13:43:14.197653] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:14.725 [2024-11-20 13:43:14.197737] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:23:14.725 [2024-11-20 13:43:14.197873] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:14.725 pt2 00:23:14.725 13:43:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:23:14.725 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:14.725 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:14.725 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:14.725 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:14.725 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:14.725 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:14.725 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:14.725 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:14.725 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:14.725 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:14.725 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:14.725 13:43:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.725 13:43:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:14.725 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:14.983 13:43:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.983 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:14.983 "name": "raid_bdev1", 00:23:14.983 
"uuid": "6ffa13ee-53e9-4169-9eea-19955e7d04a7", 00:23:14.983 "strip_size_kb": 0, 00:23:14.983 "state": "online", 00:23:14.983 "raid_level": "raid1", 00:23:14.983 "superblock": true, 00:23:14.983 "num_base_bdevs": 2, 00:23:14.983 "num_base_bdevs_discovered": 1, 00:23:14.983 "num_base_bdevs_operational": 1, 00:23:14.983 "base_bdevs_list": [ 00:23:14.983 { 00:23:14.983 "name": null, 00:23:14.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.983 "is_configured": false, 00:23:14.983 "data_offset": 256, 00:23:14.983 "data_size": 7936 00:23:14.983 }, 00:23:14.983 { 00:23:14.983 "name": "pt2", 00:23:14.983 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:14.983 "is_configured": true, 00:23:14.983 "data_offset": 256, 00:23:14.983 "data_size": 7936 00:23:14.983 } 00:23:14.983 ] 00:23:14.983 }' 00:23:14.983 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:14.983 13:43:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:15.241 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:15.241 13:43:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.241 13:43:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:15.241 [2024-11-20 13:43:14.630178] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:15.241 [2024-11-20 13:43:14.630348] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:15.241 [2024-11-20 13:43:14.630441] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:15.241 [2024-11-20 13:43:14.630494] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:15.241 [2024-11-20 13:43:14.630505] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:23:15.241 13:43:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.241 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:15.241 13:43:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.241 13:43:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:15.241 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:23:15.241 13:43:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.241 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:23:15.241 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:23:15.241 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:23:15.241 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:15.241 13:43:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.241 13:43:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:15.241 [2024-11-20 13:43:14.694240] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:15.241 [2024-11-20 13:43:14.694325] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:15.241 [2024-11-20 13:43:14.694349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:23:15.241 [2024-11-20 13:43:14.694362] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:15.241 [2024-11-20 
13:43:14.696631] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:15.241 [2024-11-20 13:43:14.696675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:15.241 [2024-11-20 13:43:14.696744] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:15.241 [2024-11-20 13:43:14.696797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:15.241 [2024-11-20 13:43:14.696921] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:15.241 [2024-11-20 13:43:14.696933] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:15.241 [2024-11-20 13:43:14.696954] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:23:15.241 [2024-11-20 13:43:14.697018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:15.241 [2024-11-20 13:43:14.697110] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:23:15.241 [2024-11-20 13:43:14.697121] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:15.241 [2024-11-20 13:43:14.697193] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:15.241 [2024-11-20 13:43:14.697304] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:23:15.241 [2024-11-20 13:43:14.697363] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:23:15.241 [2024-11-20 13:43:14.697492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:15.241 pt1 00:23:15.241 13:43:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.241 13:43:14 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:23:15.241 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:15.241 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:15.241 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:15.241 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:15.241 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:15.241 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:15.241 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:15.241 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:15.241 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:15.241 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:15.241 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:15.241 13:43:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.241 13:43:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:15.241 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:15.241 13:43:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.576 13:43:14 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:15.576 "name": "raid_bdev1", 00:23:15.576 "uuid": "6ffa13ee-53e9-4169-9eea-19955e7d04a7", 00:23:15.576 "strip_size_kb": 0, 00:23:15.576 "state": "online", 00:23:15.576 "raid_level": "raid1", 00:23:15.576 "superblock": true, 00:23:15.576 "num_base_bdevs": 2, 00:23:15.576 "num_base_bdevs_discovered": 1, 00:23:15.576 "num_base_bdevs_operational": 1, 00:23:15.576 "base_bdevs_list": [ 00:23:15.576 { 00:23:15.576 "name": null, 00:23:15.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.576 "is_configured": false, 00:23:15.576 "data_offset": 256, 00:23:15.576 "data_size": 7936 00:23:15.576 }, 00:23:15.576 { 00:23:15.576 "name": "pt2", 00:23:15.576 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:15.576 "is_configured": true, 00:23:15.576 "data_offset": 256, 00:23:15.576 "data_size": 7936 00:23:15.576 } 00:23:15.576 ] 00:23:15.576 }' 00:23:15.576 13:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:15.576 13:43:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:15.873 13:43:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:23:15.873 13:43:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.873 13:43:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:15.873 13:43:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:15.873 13:43:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.873 13:43:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:23:15.873 13:43:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:23:15.873 13:43:15 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:15.873 13:43:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.873 13:43:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:15.873 [2024-11-20 13:43:15.185720] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:15.873 13:43:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.873 13:43:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 6ffa13ee-53e9-4169-9eea-19955e7d04a7 '!=' 6ffa13ee-53e9-4169-9eea-19955e7d04a7 ']' 00:23:15.873 13:43:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87251 00:23:15.873 13:43:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87251 ']' 00:23:15.873 13:43:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87251 00:23:15.873 13:43:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:23:15.873 13:43:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:15.873 13:43:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87251 00:23:15.873 13:43:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:15.873 killing process with pid 87251 00:23:15.873 13:43:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:15.873 13:43:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87251' 00:23:15.873 13:43:15 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@973 -- # kill 87251 00:23:15.873 [2024-11-20 13:43:15.273742] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:15.873 [2024-11-20 13:43:15.273835] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:15.873 13:43:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87251 00:23:15.873 [2024-11-20 13:43:15.273884] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:15.873 [2024-11-20 13:43:15.273904] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:23:16.132 [2024-11-20 13:43:15.496387] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:17.510 13:43:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:23:17.510 ************************************ 00:23:17.510 END TEST raid_superblock_test_md_separate 00:23:17.510 ************************************ 00:23:17.510 00:23:17.510 real 0m5.992s 00:23:17.510 user 0m8.987s 00:23:17.510 sys 0m1.224s 00:23:17.510 13:43:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:17.510 13:43:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:17.510 13:43:16 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:23:17.510 13:43:16 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:23:17.510 13:43:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:23:17.511 13:43:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:17.511 13:43:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:17.511 ************************************ 00:23:17.511 START TEST raid_rebuild_test_sb_md_separate 00:23:17.511 
************************************ 00:23:17.511 13:43:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:23:17.511 13:43:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:23:17.511 13:43:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:23:17.511 13:43:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:23:17.511 13:43:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:23:17.511 13:43:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:23:17.511 13:43:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:23:17.511 13:43:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:17.511 13:43:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:23:17.511 13:43:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:17.511 13:43:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:17.511 13:43:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:23:17.511 13:43:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:17.511 13:43:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:17.511 13:43:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:17.511 13:43:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:23:17.511 13:43:16 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:23:17.511 13:43:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:23:17.511 13:43:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:23:17.511 13:43:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:23:17.511 13:43:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:23:17.511 13:43:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:23:17.511 13:43:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:23:17.511 13:43:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:23:17.511 13:43:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:23:17.511 13:43:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87575 00:23:17.511 13:43:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87575 00:23:17.511 13:43:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:17.511 13:43:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87575 ']' 00:23:17.511 13:43:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:17.511 13:43:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:17.511 13:43:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.511 13:43:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:17.511 13:43:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:17.511 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:17.511 Zero copy mechanism will not be used. 00:23:17.511 [2024-11-20 13:43:16.820946] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:23:17.511 [2024-11-20 13:43:16.821089] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87575 ] 00:23:17.770 [2024-11-20 13:43:17.003093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.770 [2024-11-20 13:43:17.118969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.030 [2024-11-20 13:43:17.323911] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:18.030 [2024-11-20 13:43:17.323979] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:18.289 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:18.289 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:23:18.289 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:18.289 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # 
rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:23:18.289 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.289 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:18.289 BaseBdev1_malloc 00:23:18.290 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.290 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:18.290 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.290 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:18.290 [2024-11-20 13:43:17.711141] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:18.290 [2024-11-20 13:43:17.711206] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:18.290 [2024-11-20 13:43:17.711230] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:18.290 [2024-11-20 13:43:17.711245] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:18.290 [2024-11-20 13:43:17.713384] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:18.290 [2024-11-20 13:43:17.713425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:18.290 BaseBdev1 00:23:18.290 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.290 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:18.290 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:23:18.290 13:43:17 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.290 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:18.290 BaseBdev2_malloc 00:23:18.290 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.290 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:18.290 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.290 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:18.290 [2024-11-20 13:43:17.767610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:18.290 [2024-11-20 13:43:17.767680] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:18.290 [2024-11-20 13:43:17.767701] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:18.290 [2024-11-20 13:43:17.767718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:18.290 [2024-11-20 13:43:17.769853] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:18.290 [2024-11-20 13:43:17.769896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:18.290 BaseBdev2 00:23:18.290 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.290 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:23:18.290 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.290 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:23:18.549 spare_malloc 00:23:18.549 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.549 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:18.549 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.549 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:18.549 spare_delay 00:23:18.549 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.549 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:18.549 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.549 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:18.549 [2024-11-20 13:43:17.843880] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:18.549 [2024-11-20 13:43:17.843945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:18.549 [2024-11-20 13:43:17.843969] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:18.549 [2024-11-20 13:43:17.843983] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:18.549 [2024-11-20 13:43:17.846134] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:18.549 [2024-11-20 13:43:17.846324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:18.549 spare 00:23:18.549 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.549 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:23:18.549 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.549 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:18.549 [2024-11-20 13:43:17.855937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:18.549 [2024-11-20 13:43:17.857973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:18.549 [2024-11-20 13:43:17.858309] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:18.549 [2024-11-20 13:43:17.858334] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:18.549 [2024-11-20 13:43:17.858427] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:18.549 [2024-11-20 13:43:17.858558] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:18.549 [2024-11-20 13:43:17.858569] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:18.549 [2024-11-20 13:43:17.858674] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:18.549 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.549 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:18.549 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:18.549 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:18.549 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:23:18.549 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:18.549 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:18.549 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:18.549 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:18.549 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:18.549 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:18.549 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:18.549 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.549 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.549 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:18.549 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.549 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:18.549 "name": "raid_bdev1", 00:23:18.549 "uuid": "b5ffa764-2f07-4013-9f10-22fd10f235a3", 00:23:18.549 "strip_size_kb": 0, 00:23:18.549 "state": "online", 00:23:18.549 "raid_level": "raid1", 00:23:18.549 "superblock": true, 00:23:18.549 "num_base_bdevs": 2, 00:23:18.549 "num_base_bdevs_discovered": 2, 00:23:18.549 "num_base_bdevs_operational": 2, 00:23:18.549 "base_bdevs_list": [ 00:23:18.549 { 00:23:18.549 "name": "BaseBdev1", 00:23:18.549 "uuid": "239502b1-55b8-52aa-a271-06f07093f8b5", 00:23:18.549 "is_configured": true, 00:23:18.549 "data_offset": 256, 
00:23:18.549 "data_size": 7936 00:23:18.549 }, 00:23:18.549 { 00:23:18.549 "name": "BaseBdev2", 00:23:18.549 "uuid": "3e829f17-08e0-5386-8882-65b4a751a1f6", 00:23:18.549 "is_configured": true, 00:23:18.549 "data_offset": 256, 00:23:18.549 "data_size": 7936 00:23:18.549 } 00:23:18.549 ] 00:23:18.549 }' 00:23:18.549 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:18.549 13:43:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:18.809 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:18.809 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.809 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:18.809 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:23:18.809 [2024-11-20 13:43:18.275595] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:19.067 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.067 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:23:19.067 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:19.067 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.067 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:19.067 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:19.067 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.067 13:43:18 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:23:19.067 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:23:19.067 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:23:19.067 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:23:19.067 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:23:19.067 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:19.067 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:19.067 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:19.067 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:19.067 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:19.067 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:23:19.067 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:19.067 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:19.067 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:19.326 [2024-11-20 13:43:18.582989] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:19.326 /dev/nbd0 00:23:19.326 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:19.326 13:43:18 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:19.326 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:19.326 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:23:19.326 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:19.326 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:19.326 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:19.326 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:23:19.326 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:19.326 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:19.326 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:19.326 1+0 records in 00:23:19.326 1+0 records out 00:23:19.326 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377472 s, 10.9 MB/s 00:23:19.326 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:19.326 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:23:19.326 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:19.326 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:19.326 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@893 -- # return 0 00:23:19.326 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:19.326 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:19.326 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:23:19.326 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:23:19.326 13:43:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:23:20.260 7936+0 records in 00:23:20.260 7936+0 records out 00:23:20.260 32505856 bytes (33 MB, 31 MiB) copied, 0.719525 s, 45.2 MB/s 00:23:20.260 13:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:23:20.260 13:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:20.260 13:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:20.260 13:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:20.260 13:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:23:20.260 13:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:20.260 13:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:20.260 13:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:20.260 [2024-11-20 13:43:19.620288] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:20.260 13:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:20.260 13:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:20.260 13:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:20.260 13:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:20.260 13:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:20.260 13:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:23:20.260 13:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:23:20.261 13:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:23:20.261 13:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.261 13:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:20.261 [2024-11-20 13:43:19.637951] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:20.261 13:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.261 13:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:20.261 13:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:20.261 13:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:20.261 13:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:20.261 13:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:20.261 13:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:20.261 13:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:20.261 13:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:20.261 13:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:20.261 13:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:20.261 13:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:20.261 13:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:20.261 13:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.261 13:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:20.261 13:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.261 13:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:20.261 "name": "raid_bdev1", 00:23:20.261 "uuid": "b5ffa764-2f07-4013-9f10-22fd10f235a3", 00:23:20.261 "strip_size_kb": 0, 00:23:20.261 "state": "online", 00:23:20.261 "raid_level": "raid1", 00:23:20.261 "superblock": true, 00:23:20.261 "num_base_bdevs": 2, 00:23:20.261 "num_base_bdevs_discovered": 1, 00:23:20.261 "num_base_bdevs_operational": 1, 00:23:20.261 "base_bdevs_list": [ 00:23:20.261 { 00:23:20.261 "name": null, 00:23:20.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:20.261 "is_configured": false, 00:23:20.261 "data_offset": 0, 00:23:20.261 "data_size": 7936 00:23:20.261 }, 00:23:20.261 { 00:23:20.261 "name": "BaseBdev2", 00:23:20.261 "uuid": "3e829f17-08e0-5386-8882-65b4a751a1f6", 00:23:20.261 "is_configured": 
true, 00:23:20.261 "data_offset": 256, 00:23:20.261 "data_size": 7936 00:23:20.261 } 00:23:20.261 ] 00:23:20.261 }' 00:23:20.261 13:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:20.261 13:43:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:20.829 13:43:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:20.829 13:43:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.829 13:43:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:20.829 [2024-11-20 13:43:20.097310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:20.829 [2024-11-20 13:43:20.112609] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:23:20.829 13:43:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.829 13:43:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:23:20.829 [2024-11-20 13:43:20.114765] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:21.767 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:21.767 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:21.767 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:21.767 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:21.767 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:21.767 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:21.767 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:21.767 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.767 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:21.767 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.768 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:21.768 "name": "raid_bdev1", 00:23:21.768 "uuid": "b5ffa764-2f07-4013-9f10-22fd10f235a3", 00:23:21.768 "strip_size_kb": 0, 00:23:21.768 "state": "online", 00:23:21.768 "raid_level": "raid1", 00:23:21.768 "superblock": true, 00:23:21.768 "num_base_bdevs": 2, 00:23:21.768 "num_base_bdevs_discovered": 2, 00:23:21.768 "num_base_bdevs_operational": 2, 00:23:21.768 "process": { 00:23:21.768 "type": "rebuild", 00:23:21.768 "target": "spare", 00:23:21.768 "progress": { 00:23:21.768 "blocks": 2560, 00:23:21.768 "percent": 32 00:23:21.768 } 00:23:21.768 }, 00:23:21.768 "base_bdevs_list": [ 00:23:21.768 { 00:23:21.768 "name": "spare", 00:23:21.768 "uuid": "0b38f0d0-4ff6-5e3b-9677-90896cd2546a", 00:23:21.768 "is_configured": true, 00:23:21.768 "data_offset": 256, 00:23:21.768 "data_size": 7936 00:23:21.768 }, 00:23:21.768 { 00:23:21.768 "name": "BaseBdev2", 00:23:21.768 "uuid": "3e829f17-08e0-5386-8882-65b4a751a1f6", 00:23:21.768 "is_configured": true, 00:23:21.768 "data_offset": 256, 00:23:21.768 "data_size": 7936 00:23:21.768 } 00:23:21.768 ] 00:23:21.768 }' 00:23:21.768 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:21.768 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:21.768 
13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:22.028 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:22.028 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:22.028 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.028 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:22.028 [2024-11-20 13:43:21.263170] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:22.028 [2024-11-20 13:43:21.320188] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:22.028 [2024-11-20 13:43:21.320273] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:22.028 [2024-11-20 13:43:21.320289] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:22.028 [2024-11-20 13:43:21.320301] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:22.028 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.028 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:22.028 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:22.028 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:22.028 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:22.028 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:22.028 13:43:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:22.028 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:22.028 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:22.028 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:22.028 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:22.028 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:22.028 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:22.028 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.028 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:22.028 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.028 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:22.028 "name": "raid_bdev1", 00:23:22.028 "uuid": "b5ffa764-2f07-4013-9f10-22fd10f235a3", 00:23:22.028 "strip_size_kb": 0, 00:23:22.028 "state": "online", 00:23:22.028 "raid_level": "raid1", 00:23:22.028 "superblock": true, 00:23:22.028 "num_base_bdevs": 2, 00:23:22.028 "num_base_bdevs_discovered": 1, 00:23:22.028 "num_base_bdevs_operational": 1, 00:23:22.028 "base_bdevs_list": [ 00:23:22.028 { 00:23:22.028 "name": null, 00:23:22.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.028 "is_configured": false, 00:23:22.028 "data_offset": 0, 00:23:22.028 "data_size": 7936 00:23:22.028 }, 00:23:22.028 { 00:23:22.028 "name": "BaseBdev2", 00:23:22.028 "uuid": 
"3e829f17-08e0-5386-8882-65b4a751a1f6", 00:23:22.028 "is_configured": true, 00:23:22.028 "data_offset": 256, 00:23:22.028 "data_size": 7936 00:23:22.028 } 00:23:22.028 ] 00:23:22.028 }' 00:23:22.028 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:22.028 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:22.288 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:22.288 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:22.288 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:22.288 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:22.288 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:22.288 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:22.288 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:22.288 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.288 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:22.548 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.548 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:22.548 "name": "raid_bdev1", 00:23:22.548 "uuid": "b5ffa764-2f07-4013-9f10-22fd10f235a3", 00:23:22.548 "strip_size_kb": 0, 00:23:22.548 "state": "online", 00:23:22.548 "raid_level": "raid1", 00:23:22.548 "superblock": true, 00:23:22.548 
"num_base_bdevs": 2, 00:23:22.548 "num_base_bdevs_discovered": 1, 00:23:22.548 "num_base_bdevs_operational": 1, 00:23:22.548 "base_bdevs_list": [ 00:23:22.548 { 00:23:22.548 "name": null, 00:23:22.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.548 "is_configured": false, 00:23:22.548 "data_offset": 0, 00:23:22.548 "data_size": 7936 00:23:22.548 }, 00:23:22.548 { 00:23:22.548 "name": "BaseBdev2", 00:23:22.548 "uuid": "3e829f17-08e0-5386-8882-65b4a751a1f6", 00:23:22.548 "is_configured": true, 00:23:22.548 "data_offset": 256, 00:23:22.548 "data_size": 7936 00:23:22.548 } 00:23:22.548 ] 00:23:22.548 }' 00:23:22.548 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:22.548 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:22.548 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:22.548 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:22.548 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:22.548 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.548 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:22.548 [2024-11-20 13:43:21.887946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:22.548 [2024-11-20 13:43:21.902465] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:23:22.548 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.548 13:43:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:23:22.548 [2024-11-20 13:43:21.904633] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:23.487 13:43:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:23.487 13:43:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:23.487 13:43:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:23.487 13:43:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:23.487 13:43:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:23.487 13:43:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:23.487 13:43:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:23.487 13:43:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.487 13:43:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:23.487 13:43:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.487 13:43:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:23.487 "name": "raid_bdev1", 00:23:23.487 "uuid": "b5ffa764-2f07-4013-9f10-22fd10f235a3", 00:23:23.487 "strip_size_kb": 0, 00:23:23.487 "state": "online", 00:23:23.487 "raid_level": "raid1", 00:23:23.487 "superblock": true, 00:23:23.487 "num_base_bdevs": 2, 00:23:23.487 "num_base_bdevs_discovered": 2, 00:23:23.487 "num_base_bdevs_operational": 2, 00:23:23.487 "process": { 00:23:23.487 "type": "rebuild", 00:23:23.487 "target": "spare", 00:23:23.487 "progress": { 00:23:23.487 "blocks": 2560, 00:23:23.487 "percent": 32 00:23:23.487 } 00:23:23.487 
}, 00:23:23.487 "base_bdevs_list": [ 00:23:23.487 { 00:23:23.487 "name": "spare", 00:23:23.487 "uuid": "0b38f0d0-4ff6-5e3b-9677-90896cd2546a", 00:23:23.487 "is_configured": true, 00:23:23.487 "data_offset": 256, 00:23:23.487 "data_size": 7936 00:23:23.487 }, 00:23:23.487 { 00:23:23.487 "name": "BaseBdev2", 00:23:23.487 "uuid": "3e829f17-08e0-5386-8882-65b4a751a1f6", 00:23:23.487 "is_configured": true, 00:23:23.487 "data_offset": 256, 00:23:23.487 "data_size": 7936 00:23:23.487 } 00:23:23.487 ] 00:23:23.487 }' 00:23:23.487 13:43:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:23.748 13:43:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:23.748 13:43:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:23.748 13:43:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:23.748 13:43:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:23:23.748 13:43:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:23:23.748 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:23:23.748 13:43:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:23:23.748 13:43:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:23:23.748 13:43:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:23:23.748 13:43:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=711 00:23:23.748 13:43:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:23.748 13:43:23 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:23.748 13:43:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:23.748 13:43:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:23.748 13:43:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:23.748 13:43:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:23.748 13:43:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:23.748 13:43:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:23.748 13:43:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.748 13:43:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:23.748 13:43:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.748 13:43:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:23.748 "name": "raid_bdev1", 00:23:23.748 "uuid": "b5ffa764-2f07-4013-9f10-22fd10f235a3", 00:23:23.748 "strip_size_kb": 0, 00:23:23.748 "state": "online", 00:23:23.748 "raid_level": "raid1", 00:23:23.748 "superblock": true, 00:23:23.748 "num_base_bdevs": 2, 00:23:23.748 "num_base_bdevs_discovered": 2, 00:23:23.748 "num_base_bdevs_operational": 2, 00:23:23.748 "process": { 00:23:23.748 "type": "rebuild", 00:23:23.748 "target": "spare", 00:23:23.748 "progress": { 00:23:23.748 "blocks": 2816, 00:23:23.748 "percent": 35 00:23:23.748 } 00:23:23.748 }, 00:23:23.748 "base_bdevs_list": [ 00:23:23.748 { 00:23:23.748 "name": "spare", 00:23:23.748 "uuid": 
"0b38f0d0-4ff6-5e3b-9677-90896cd2546a", 00:23:23.748 "is_configured": true, 00:23:23.748 "data_offset": 256, 00:23:23.748 "data_size": 7936 00:23:23.748 }, 00:23:23.748 { 00:23:23.748 "name": "BaseBdev2", 00:23:23.748 "uuid": "3e829f17-08e0-5386-8882-65b4a751a1f6", 00:23:23.748 "is_configured": true, 00:23:23.748 "data_offset": 256, 00:23:23.748 "data_size": 7936 00:23:23.748 } 00:23:23.748 ] 00:23:23.748 }' 00:23:23.748 13:43:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:23.748 13:43:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:23.748 13:43:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:23.748 13:43:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:23.748 13:43:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:24.697 13:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:24.697 13:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:24.697 13:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:24.697 13:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:24.697 13:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:24.697 13:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:24.698 13:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:24.698 13:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:24.698 13:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:24.698 13:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:24.957 13:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.957 13:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:24.957 "name": "raid_bdev1", 00:23:24.957 "uuid": "b5ffa764-2f07-4013-9f10-22fd10f235a3", 00:23:24.957 "strip_size_kb": 0, 00:23:24.957 "state": "online", 00:23:24.957 "raid_level": "raid1", 00:23:24.957 "superblock": true, 00:23:24.957 "num_base_bdevs": 2, 00:23:24.957 "num_base_bdevs_discovered": 2, 00:23:24.957 "num_base_bdevs_operational": 2, 00:23:24.957 "process": { 00:23:24.957 "type": "rebuild", 00:23:24.957 "target": "spare", 00:23:24.957 "progress": { 00:23:24.957 "blocks": 5632, 00:23:24.957 "percent": 70 00:23:24.957 } 00:23:24.957 }, 00:23:24.957 "base_bdevs_list": [ 00:23:24.957 { 00:23:24.957 "name": "spare", 00:23:24.957 "uuid": "0b38f0d0-4ff6-5e3b-9677-90896cd2546a", 00:23:24.957 "is_configured": true, 00:23:24.957 "data_offset": 256, 00:23:24.957 "data_size": 7936 00:23:24.957 }, 00:23:24.957 { 00:23:24.957 "name": "BaseBdev2", 00:23:24.957 "uuid": "3e829f17-08e0-5386-8882-65b4a751a1f6", 00:23:24.957 "is_configured": true, 00:23:24.957 "data_offset": 256, 00:23:24.957 "data_size": 7936 00:23:24.957 } 00:23:24.957 ] 00:23:24.957 }' 00:23:24.957 13:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:24.957 13:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:24.957 13:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:24.957 13:43:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:24.957 13:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:25.893 [2024-11-20 13:43:25.019090] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:25.893 [2024-11-20 13:43:25.019201] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:25.893 [2024-11-20 13:43:25.019335] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:25.893 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:25.893 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:25.893 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:25.893 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:25.893 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:25.893 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:25.893 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:25.893 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:25.893 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.893 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:25.893 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.893 13:43:25 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:25.893 "name": "raid_bdev1", 00:23:25.893 "uuid": "b5ffa764-2f07-4013-9f10-22fd10f235a3", 00:23:25.893 "strip_size_kb": 0, 00:23:25.893 "state": "online", 00:23:25.893 "raid_level": "raid1", 00:23:25.893 "superblock": true, 00:23:25.893 "num_base_bdevs": 2, 00:23:25.893 "num_base_bdevs_discovered": 2, 00:23:25.893 "num_base_bdevs_operational": 2, 00:23:25.893 "base_bdevs_list": [ 00:23:25.893 { 00:23:25.893 "name": "spare", 00:23:25.893 "uuid": "0b38f0d0-4ff6-5e3b-9677-90896cd2546a", 00:23:25.893 "is_configured": true, 00:23:25.893 "data_offset": 256, 00:23:25.893 "data_size": 7936 00:23:25.893 }, 00:23:25.893 { 00:23:25.893 "name": "BaseBdev2", 00:23:25.893 "uuid": "3e829f17-08e0-5386-8882-65b4a751a1f6", 00:23:25.893 "is_configured": true, 00:23:25.893 "data_offset": 256, 00:23:25.893 "data_size": 7936 00:23:25.893 } 00:23:25.893 ] 00:23:25.893 }' 00:23:25.893 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:26.153 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:26.153 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:26.153 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:23:26.153 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:23:26.153 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:26.153 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:26.153 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:26.153 13:43:25 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:26.153 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:26.153 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:26.153 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.153 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:26.153 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:26.153 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.153 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:26.153 "name": "raid_bdev1", 00:23:26.153 "uuid": "b5ffa764-2f07-4013-9f10-22fd10f235a3", 00:23:26.153 "strip_size_kb": 0, 00:23:26.153 "state": "online", 00:23:26.153 "raid_level": "raid1", 00:23:26.153 "superblock": true, 00:23:26.153 "num_base_bdevs": 2, 00:23:26.153 "num_base_bdevs_discovered": 2, 00:23:26.153 "num_base_bdevs_operational": 2, 00:23:26.153 "base_bdevs_list": [ 00:23:26.153 { 00:23:26.153 "name": "spare", 00:23:26.153 "uuid": "0b38f0d0-4ff6-5e3b-9677-90896cd2546a", 00:23:26.153 "is_configured": true, 00:23:26.153 "data_offset": 256, 00:23:26.153 "data_size": 7936 00:23:26.153 }, 00:23:26.153 { 00:23:26.153 "name": "BaseBdev2", 00:23:26.153 "uuid": "3e829f17-08e0-5386-8882-65b4a751a1f6", 00:23:26.153 "is_configured": true, 00:23:26.153 "data_offset": 256, 00:23:26.153 "data_size": 7936 00:23:26.153 } 00:23:26.153 ] 00:23:26.153 }' 00:23:26.153 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:26.153 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:26.153 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:26.153 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:26.153 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:26.153 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:26.153 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:26.153 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:26.153 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:26.153 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:26.153 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:26.153 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:26.153 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:26.153 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:26.153 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:26.153 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.153 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:26.153 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:23:26.153 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.414 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:26.414 "name": "raid_bdev1", 00:23:26.414 "uuid": "b5ffa764-2f07-4013-9f10-22fd10f235a3", 00:23:26.414 "strip_size_kb": 0, 00:23:26.414 "state": "online", 00:23:26.414 "raid_level": "raid1", 00:23:26.414 "superblock": true, 00:23:26.414 "num_base_bdevs": 2, 00:23:26.414 "num_base_bdevs_discovered": 2, 00:23:26.414 "num_base_bdevs_operational": 2, 00:23:26.414 "base_bdevs_list": [ 00:23:26.414 { 00:23:26.414 "name": "spare", 00:23:26.414 "uuid": "0b38f0d0-4ff6-5e3b-9677-90896cd2546a", 00:23:26.414 "is_configured": true, 00:23:26.414 "data_offset": 256, 00:23:26.414 "data_size": 7936 00:23:26.414 }, 00:23:26.414 { 00:23:26.414 "name": "BaseBdev2", 00:23:26.414 "uuid": "3e829f17-08e0-5386-8882-65b4a751a1f6", 00:23:26.414 "is_configured": true, 00:23:26.414 "data_offset": 256, 00:23:26.414 "data_size": 7936 00:23:26.414 } 00:23:26.414 ] 00:23:26.414 }' 00:23:26.414 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:26.414 13:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:26.673 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:26.673 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.673 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:26.673 [2024-11-20 13:43:26.022425] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:26.673 [2024-11-20 13:43:26.022477] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:26.673 [2024-11-20 13:43:26.022571] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:26.673 [2024-11-20 13:43:26.022644] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:26.673 [2024-11-20 13:43:26.022657] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:26.673 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.673 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:23:26.673 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:26.673 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.673 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:26.673 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.673 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:23:26.673 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:23:26.673 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:23:26.673 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:26.673 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:26.673 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:26.673 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:26.673 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:26.673 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:26.673 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:23:26.673 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:26.673 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:26.674 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:26.933 /dev/nbd0 00:23:26.933 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:26.933 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:26.933 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:26.933 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:23:26.933 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:26.933 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:26.933 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:26.933 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:23:26.933 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:26.933 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:26.933 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:26.933 1+0 records in 00:23:26.933 1+0 records out 00:23:26.933 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289339 s, 14.2 MB/s 00:23:26.933 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:26.933 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:23:26.933 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:26.933 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:26.933 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:23:26.933 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:26.933 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:26.933 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:23:27.192 /dev/nbd1 00:23:27.192 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:27.192 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:27.192 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:23:27.192 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:23:27.192 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:27.192 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 
00:23:27.192 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:23:27.192 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:23:27.192 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:27.192 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:27.192 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:27.192 1+0 records in 00:23:27.192 1+0 records out 00:23:27.192 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000503623 s, 8.1 MB/s 00:23:27.192 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:27.193 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:23:27.193 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:27.193 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:27.193 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:23:27.193 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:27.193 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:27.193 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:27.452 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:23:27.452 13:43:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:27.452 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:27.452 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:27.452 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:23:27.452 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:27.452 13:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:27.711 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:27.711 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:27.711 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:27.711 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:27.711 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:27.711 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:27.711 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:23:27.711 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:23:27.711 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:27.711 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:23:27.971 13:43:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:27.971 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:27.971 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:27.971 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:27.971 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:27.971 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:27.971 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:23:27.971 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:23:27.971 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:23:27.971 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:23:27.971 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.971 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:27.971 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.971 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:27.971 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.971 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:27.971 [2024-11-20 13:43:27.279756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:27.971 [2024-11-20 13:43:27.279950] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:27.971 [2024-11-20 13:43:27.279988] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:27.971 [2024-11-20 13:43:27.280000] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:27.971 [2024-11-20 13:43:27.282289] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:27.971 [2024-11-20 13:43:27.282442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:27.971 [2024-11-20 13:43:27.282538] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:27.971 [2024-11-20 13:43:27.282621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:27.971 [2024-11-20 13:43:27.282762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:27.971 spare 00:23:27.971 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.971 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:23:27.971 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.971 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:27.971 [2024-11-20 13:43:27.382685] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:23:27.971 [2024-11-20 13:43:27.382919] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:27.971 [2024-11-20 13:43:27.383109] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:23:27.971 [2024-11-20 13:43:27.383368] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:23:27.971 [2024-11-20 13:43:27.383485] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:23:27.971 [2024-11-20 13:43:27.383744] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:27.971 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.971 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:27.971 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:27.971 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:27.971 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:27.971 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:27.971 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:27.971 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:27.971 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:27.971 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:27.971 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:27.971 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:27.971 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:27.971 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.971 13:43:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:27.971 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.971 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:27.971 "name": "raid_bdev1", 00:23:27.971 "uuid": "b5ffa764-2f07-4013-9f10-22fd10f235a3", 00:23:27.971 "strip_size_kb": 0, 00:23:27.971 "state": "online", 00:23:27.971 "raid_level": "raid1", 00:23:27.971 "superblock": true, 00:23:27.971 "num_base_bdevs": 2, 00:23:27.971 "num_base_bdevs_discovered": 2, 00:23:27.971 "num_base_bdevs_operational": 2, 00:23:27.971 "base_bdevs_list": [ 00:23:27.971 { 00:23:27.971 "name": "spare", 00:23:27.971 "uuid": "0b38f0d0-4ff6-5e3b-9677-90896cd2546a", 00:23:27.971 "is_configured": true, 00:23:27.971 "data_offset": 256, 00:23:27.971 "data_size": 7936 00:23:27.971 }, 00:23:27.971 { 00:23:27.971 "name": "BaseBdev2", 00:23:27.971 "uuid": "3e829f17-08e0-5386-8882-65b4a751a1f6", 00:23:27.971 "is_configured": true, 00:23:27.971 "data_offset": 256, 00:23:27.971 "data_size": 7936 00:23:27.971 } 00:23:27.971 ] 00:23:27.971 }' 00:23:27.971 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:27.971 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:28.581 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:28.581 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:28.581 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:28.581 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:28.581 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:23:28.581 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:28.581 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:28.581 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.581 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:28.581 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.581 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:28.581 "name": "raid_bdev1", 00:23:28.581 "uuid": "b5ffa764-2f07-4013-9f10-22fd10f235a3", 00:23:28.581 "strip_size_kb": 0, 00:23:28.581 "state": "online", 00:23:28.581 "raid_level": "raid1", 00:23:28.581 "superblock": true, 00:23:28.581 "num_base_bdevs": 2, 00:23:28.581 "num_base_bdevs_discovered": 2, 00:23:28.581 "num_base_bdevs_operational": 2, 00:23:28.581 "base_bdevs_list": [ 00:23:28.581 { 00:23:28.581 "name": "spare", 00:23:28.581 "uuid": "0b38f0d0-4ff6-5e3b-9677-90896cd2546a", 00:23:28.581 "is_configured": true, 00:23:28.581 "data_offset": 256, 00:23:28.581 "data_size": 7936 00:23:28.581 }, 00:23:28.581 { 00:23:28.581 "name": "BaseBdev2", 00:23:28.581 "uuid": "3e829f17-08e0-5386-8882-65b4a751a1f6", 00:23:28.581 "is_configured": true, 00:23:28.581 "data_offset": 256, 00:23:28.581 "data_size": 7936 00:23:28.581 } 00:23:28.581 ] 00:23:28.581 }' 00:23:28.581 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:28.581 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:28.581 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:28.581 
13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:28.581 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:28.581 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.581 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:28.581 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:28.581 13:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.581 13:43:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:23:28.581 13:43:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:28.581 13:43:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.581 13:43:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:28.581 [2024-11-20 13:43:28.030857] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:28.581 13:43:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.581 13:43:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:28.581 13:43:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:28.581 13:43:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:28.581 13:43:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:28.581 13:43:28 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:28.581 13:43:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:28.581 13:43:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:28.581 13:43:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:28.581 13:43:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:28.581 13:43:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:28.581 13:43:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:28.581 13:43:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.581 13:43:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:28.581 13:43:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:28.581 13:43:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.840 13:43:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:28.840 "name": "raid_bdev1", 00:23:28.840 "uuid": "b5ffa764-2f07-4013-9f10-22fd10f235a3", 00:23:28.840 "strip_size_kb": 0, 00:23:28.840 "state": "online", 00:23:28.840 "raid_level": "raid1", 00:23:28.840 "superblock": true, 00:23:28.840 "num_base_bdevs": 2, 00:23:28.840 "num_base_bdevs_discovered": 1, 00:23:28.840 "num_base_bdevs_operational": 1, 00:23:28.840 "base_bdevs_list": [ 00:23:28.840 { 00:23:28.840 "name": null, 00:23:28.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:28.840 "is_configured": false, 00:23:28.840 "data_offset": 0, 00:23:28.840 "data_size": 7936 00:23:28.840 }, 00:23:28.840 { 00:23:28.840 
"name": "BaseBdev2", 00:23:28.840 "uuid": "3e829f17-08e0-5386-8882-65b4a751a1f6", 00:23:28.840 "is_configured": true, 00:23:28.840 "data_offset": 256, 00:23:28.840 "data_size": 7936 00:23:28.840 } 00:23:28.840 ] 00:23:28.840 }' 00:23:28.840 13:43:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:28.840 13:43:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:29.099 13:43:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:29.099 13:43:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.099 13:43:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:29.099 [2024-11-20 13:43:28.462435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:29.099 [2024-11-20 13:43:28.462774] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:29.099 [2024-11-20 13:43:28.462802] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:29.099 [2024-11-20 13:43:28.462847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:29.099 [2024-11-20 13:43:28.477365] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:23:29.099 13:43:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.099 13:43:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:23:29.099 [2024-11-20 13:43:28.479525] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:30.035 13:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:30.035 13:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:30.035 13:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:30.035 13:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:30.035 13:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:30.035 13:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:30.035 13:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.035 13:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:30.035 13:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:30.035 13:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.294 13:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:30.294 "name": "raid_bdev1", 00:23:30.294 
"uuid": "b5ffa764-2f07-4013-9f10-22fd10f235a3", 00:23:30.294 "strip_size_kb": 0, 00:23:30.294 "state": "online", 00:23:30.294 "raid_level": "raid1", 00:23:30.294 "superblock": true, 00:23:30.294 "num_base_bdevs": 2, 00:23:30.294 "num_base_bdevs_discovered": 2, 00:23:30.294 "num_base_bdevs_operational": 2, 00:23:30.294 "process": { 00:23:30.294 "type": "rebuild", 00:23:30.294 "target": "spare", 00:23:30.294 "progress": { 00:23:30.294 "blocks": 2560, 00:23:30.294 "percent": 32 00:23:30.294 } 00:23:30.294 }, 00:23:30.294 "base_bdevs_list": [ 00:23:30.294 { 00:23:30.294 "name": "spare", 00:23:30.294 "uuid": "0b38f0d0-4ff6-5e3b-9677-90896cd2546a", 00:23:30.294 "is_configured": true, 00:23:30.294 "data_offset": 256, 00:23:30.294 "data_size": 7936 00:23:30.294 }, 00:23:30.294 { 00:23:30.294 "name": "BaseBdev2", 00:23:30.294 "uuid": "3e829f17-08e0-5386-8882-65b4a751a1f6", 00:23:30.294 "is_configured": true, 00:23:30.294 "data_offset": 256, 00:23:30.295 "data_size": 7936 00:23:30.295 } 00:23:30.295 ] 00:23:30.295 }' 00:23:30.295 13:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:30.295 13:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:30.295 13:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:30.295 13:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:30.295 13:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:23:30.295 13:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.295 13:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:30.295 [2024-11-20 13:43:29.628245] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:30.295 
[2024-11-20 13:43:29.684961] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:30.295 [2024-11-20 13:43:29.685028] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:30.295 [2024-11-20 13:43:29.685045] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:30.295 [2024-11-20 13:43:29.685080] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:30.295 13:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.295 13:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:30.295 13:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:30.295 13:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:30.295 13:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:30.295 13:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:30.295 13:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:30.295 13:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:30.295 13:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:30.295 13:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:30.295 13:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:30.295 13:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:23:30.295 13:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:30.295 13:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.295 13:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:30.295 13:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.295 13:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:30.295 "name": "raid_bdev1", 00:23:30.295 "uuid": "b5ffa764-2f07-4013-9f10-22fd10f235a3", 00:23:30.295 "strip_size_kb": 0, 00:23:30.295 "state": "online", 00:23:30.295 "raid_level": "raid1", 00:23:30.295 "superblock": true, 00:23:30.295 "num_base_bdevs": 2, 00:23:30.295 "num_base_bdevs_discovered": 1, 00:23:30.295 "num_base_bdevs_operational": 1, 00:23:30.295 "base_bdevs_list": [ 00:23:30.295 { 00:23:30.295 "name": null, 00:23:30.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.295 "is_configured": false, 00:23:30.295 "data_offset": 0, 00:23:30.295 "data_size": 7936 00:23:30.295 }, 00:23:30.295 { 00:23:30.295 "name": "BaseBdev2", 00:23:30.295 "uuid": "3e829f17-08e0-5386-8882-65b4a751a1f6", 00:23:30.295 "is_configured": true, 00:23:30.295 "data_offset": 256, 00:23:30.295 "data_size": 7936 00:23:30.295 } 00:23:30.295 ] 00:23:30.295 }' 00:23:30.295 13:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:30.295 13:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:30.863 13:43:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:30.863 13:43:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.863 13:43:30 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:23:30.863 [2024-11-20 13:43:30.117208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:30.863 [2024-11-20 13:43:30.117404] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:30.863 [2024-11-20 13:43:30.117464] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:23:30.863 [2024-11-20 13:43:30.117586] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:30.863 [2024-11-20 13:43:30.117856] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:30.863 [2024-11-20 13:43:30.117876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:30.863 [2024-11-20 13:43:30.117941] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:30.863 [2024-11-20 13:43:30.117957] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:30.863 [2024-11-20 13:43:30.117969] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:30.863 [2024-11-20 13:43:30.117991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:30.863 [2024-11-20 13:43:30.131761] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:23:30.863 spare 00:23:30.863 13:43:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.863 13:43:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:23:30.863 [2024-11-20 13:43:30.133868] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:31.799 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:31.799 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:31.799 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:31.799 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:31.799 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:31.799 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:31.799 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.799 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.799 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:31.799 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.799 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:31.799 "name": 
"raid_bdev1", 00:23:31.799 "uuid": "b5ffa764-2f07-4013-9f10-22fd10f235a3", 00:23:31.799 "strip_size_kb": 0, 00:23:31.799 "state": "online", 00:23:31.799 "raid_level": "raid1", 00:23:31.799 "superblock": true, 00:23:31.799 "num_base_bdevs": 2, 00:23:31.799 "num_base_bdevs_discovered": 2, 00:23:31.799 "num_base_bdevs_operational": 2, 00:23:31.799 "process": { 00:23:31.799 "type": "rebuild", 00:23:31.799 "target": "spare", 00:23:31.799 "progress": { 00:23:31.799 "blocks": 2560, 00:23:31.799 "percent": 32 00:23:31.799 } 00:23:31.799 }, 00:23:31.799 "base_bdevs_list": [ 00:23:31.799 { 00:23:31.799 "name": "spare", 00:23:31.799 "uuid": "0b38f0d0-4ff6-5e3b-9677-90896cd2546a", 00:23:31.799 "is_configured": true, 00:23:31.799 "data_offset": 256, 00:23:31.799 "data_size": 7936 00:23:31.799 }, 00:23:31.799 { 00:23:31.799 "name": "BaseBdev2", 00:23:31.799 "uuid": "3e829f17-08e0-5386-8882-65b4a751a1f6", 00:23:31.799 "is_configured": true, 00:23:31.799 "data_offset": 256, 00:23:31.799 "data_size": 7936 00:23:31.799 } 00:23:31.799 ] 00:23:31.799 }' 00:23:31.799 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:31.799 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:31.799 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:31.799 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:31.799 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:23:31.800 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.800 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:32.059 [2024-11-20 13:43:31.286621] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:23:32.059 [2024-11-20 13:43:31.339004] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:32.059 [2024-11-20 13:43:31.339234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:32.059 [2024-11-20 13:43:31.339263] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:32.059 [2024-11-20 13:43:31.339273] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:32.059 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.059 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:32.059 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:32.059 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:32.059 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:32.059 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:32.059 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:32.059 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:32.059 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:32.059 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:32.059 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:32.059 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:23:32.059 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.059 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:32.059 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:32.059 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.059 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:32.059 "name": "raid_bdev1", 00:23:32.059 "uuid": "b5ffa764-2f07-4013-9f10-22fd10f235a3", 00:23:32.059 "strip_size_kb": 0, 00:23:32.059 "state": "online", 00:23:32.059 "raid_level": "raid1", 00:23:32.059 "superblock": true, 00:23:32.059 "num_base_bdevs": 2, 00:23:32.059 "num_base_bdevs_discovered": 1, 00:23:32.059 "num_base_bdevs_operational": 1, 00:23:32.059 "base_bdevs_list": [ 00:23:32.059 { 00:23:32.059 "name": null, 00:23:32.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:32.059 "is_configured": false, 00:23:32.059 "data_offset": 0, 00:23:32.059 "data_size": 7936 00:23:32.059 }, 00:23:32.059 { 00:23:32.059 "name": "BaseBdev2", 00:23:32.059 "uuid": "3e829f17-08e0-5386-8882-65b4a751a1f6", 00:23:32.059 "is_configured": true, 00:23:32.059 "data_offset": 256, 00:23:32.059 "data_size": 7936 00:23:32.059 } 00:23:32.059 ] 00:23:32.059 }' 00:23:32.059 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:32.059 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:32.319 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:32.319 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:32.319 13:43:31 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:32.319 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:32.319 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:32.319 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:32.319 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.319 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:32.319 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:32.578 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.578 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:32.578 "name": "raid_bdev1", 00:23:32.578 "uuid": "b5ffa764-2f07-4013-9f10-22fd10f235a3", 00:23:32.578 "strip_size_kb": 0, 00:23:32.578 "state": "online", 00:23:32.578 "raid_level": "raid1", 00:23:32.578 "superblock": true, 00:23:32.578 "num_base_bdevs": 2, 00:23:32.578 "num_base_bdevs_discovered": 1, 00:23:32.578 "num_base_bdevs_operational": 1, 00:23:32.578 "base_bdevs_list": [ 00:23:32.578 { 00:23:32.578 "name": null, 00:23:32.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:32.578 "is_configured": false, 00:23:32.578 "data_offset": 0, 00:23:32.578 "data_size": 7936 00:23:32.578 }, 00:23:32.578 { 00:23:32.578 "name": "BaseBdev2", 00:23:32.578 "uuid": "3e829f17-08e0-5386-8882-65b4a751a1f6", 00:23:32.578 "is_configured": true, 00:23:32.578 "data_offset": 256, 00:23:32.578 "data_size": 7936 00:23:32.578 } 00:23:32.578 ] 00:23:32.578 }' 00:23:32.578 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:32.578 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:32.578 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:32.578 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:32.578 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:23:32.578 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.578 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:32.578 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.578 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:32.578 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.578 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:32.578 [2024-11-20 13:43:31.931185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:32.579 [2024-11-20 13:43:31.931254] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:32.579 [2024-11-20 13:43:31.931280] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:23:32.579 [2024-11-20 13:43:31.931292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:32.579 [2024-11-20 13:43:31.931524] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:32.579 [2024-11-20 13:43:31.931540] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:23:32.579 [2024-11-20 13:43:31.931596] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:32.579 [2024-11-20 13:43:31.931610] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:32.579 [2024-11-20 13:43:31.931626] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:32.579 [2024-11-20 13:43:31.931638] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:23:32.579 BaseBdev1 00:23:32.579 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.579 13:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:23:33.514 13:43:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:33.514 13:43:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:33.514 13:43:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:33.514 13:43:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:33.514 13:43:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:33.514 13:43:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:33.515 13:43:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:33.515 13:43:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:33.515 13:43:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:23:33.515 13:43:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:33.515 13:43:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:33.515 13:43:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.515 13:43:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.515 13:43:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:33.515 13:43:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.515 13:43:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:33.515 "name": "raid_bdev1", 00:23:33.515 "uuid": "b5ffa764-2f07-4013-9f10-22fd10f235a3", 00:23:33.515 "strip_size_kb": 0, 00:23:33.515 "state": "online", 00:23:33.515 "raid_level": "raid1", 00:23:33.515 "superblock": true, 00:23:33.515 "num_base_bdevs": 2, 00:23:33.515 "num_base_bdevs_discovered": 1, 00:23:33.515 "num_base_bdevs_operational": 1, 00:23:33.515 "base_bdevs_list": [ 00:23:33.515 { 00:23:33.515 "name": null, 00:23:33.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:33.515 "is_configured": false, 00:23:33.515 "data_offset": 0, 00:23:33.515 "data_size": 7936 00:23:33.515 }, 00:23:33.515 { 00:23:33.515 "name": "BaseBdev2", 00:23:33.515 "uuid": "3e829f17-08e0-5386-8882-65b4a751a1f6", 00:23:33.515 "is_configured": true, 00:23:33.515 "data_offset": 256, 00:23:33.515 "data_size": 7936 00:23:33.515 } 00:23:33.515 ] 00:23:33.515 }' 00:23:33.515 13:43:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:33.515 13:43:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:34.084 13:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:23:34.084 13:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:34.084 13:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:34.084 13:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:34.084 13:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:34.084 13:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:34.084 13:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:34.084 13:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.084 13:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:34.084 13:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.084 13:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:34.084 "name": "raid_bdev1", 00:23:34.084 "uuid": "b5ffa764-2f07-4013-9f10-22fd10f235a3", 00:23:34.084 "strip_size_kb": 0, 00:23:34.084 "state": "online", 00:23:34.084 "raid_level": "raid1", 00:23:34.084 "superblock": true, 00:23:34.084 "num_base_bdevs": 2, 00:23:34.084 "num_base_bdevs_discovered": 1, 00:23:34.084 "num_base_bdevs_operational": 1, 00:23:34.084 "base_bdevs_list": [ 00:23:34.084 { 00:23:34.084 "name": null, 00:23:34.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:34.084 "is_configured": false, 00:23:34.084 "data_offset": 0, 00:23:34.084 "data_size": 7936 00:23:34.084 }, 00:23:34.084 { 00:23:34.084 "name": "BaseBdev2", 00:23:34.084 "uuid": "3e829f17-08e0-5386-8882-65b4a751a1f6", 00:23:34.084 "is_configured": 
true, 00:23:34.084 "data_offset": 256, 00:23:34.084 "data_size": 7936 00:23:34.084 } 00:23:34.084 ] 00:23:34.084 }' 00:23:34.084 13:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:34.084 13:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:34.084 13:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:34.084 13:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:34.084 13:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:34.084 13:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:23:34.084 13:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:34.084 13:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:34.084 13:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:34.084 13:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:34.084 13:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:34.084 13:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:34.084 13:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.084 13:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:34.084 [2024-11-20 13:43:33.473636] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:34.084 [2024-11-20 13:43:33.473945] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:34.084 [2024-11-20 13:43:33.473974] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:34.084 request: 00:23:34.084 { 00:23:34.084 "base_bdev": "BaseBdev1", 00:23:34.084 "raid_bdev": "raid_bdev1", 00:23:34.084 "method": "bdev_raid_add_base_bdev", 00:23:34.084 "req_id": 1 00:23:34.084 } 00:23:34.084 Got JSON-RPC error response 00:23:34.084 response: 00:23:34.084 { 00:23:34.084 "code": -22, 00:23:34.084 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:23:34.084 } 00:23:34.084 13:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:34.084 13:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:23:34.084 13:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:34.084 13:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:34.084 13:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:34.084 13:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:23:35.023 13:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:35.023 13:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:35.023 13:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:35.023 13:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:23:35.023 13:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:35.023 13:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:35.023 13:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:35.023 13:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:35.023 13:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:35.023 13:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:35.023 13:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:35.023 13:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:35.023 13:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.023 13:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:35.283 13:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.283 13:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:35.283 "name": "raid_bdev1", 00:23:35.283 "uuid": "b5ffa764-2f07-4013-9f10-22fd10f235a3", 00:23:35.283 "strip_size_kb": 0, 00:23:35.283 "state": "online", 00:23:35.283 "raid_level": "raid1", 00:23:35.283 "superblock": true, 00:23:35.283 "num_base_bdevs": 2, 00:23:35.283 "num_base_bdevs_discovered": 1, 00:23:35.283 "num_base_bdevs_operational": 1, 00:23:35.283 "base_bdevs_list": [ 00:23:35.283 { 00:23:35.283 "name": null, 00:23:35.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:35.283 "is_configured": false, 00:23:35.283 
"data_offset": 0, 00:23:35.283 "data_size": 7936 00:23:35.283 }, 00:23:35.283 { 00:23:35.283 "name": "BaseBdev2", 00:23:35.283 "uuid": "3e829f17-08e0-5386-8882-65b4a751a1f6", 00:23:35.283 "is_configured": true, 00:23:35.283 "data_offset": 256, 00:23:35.283 "data_size": 7936 00:23:35.283 } 00:23:35.283 ] 00:23:35.283 }' 00:23:35.283 13:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:35.283 13:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:35.542 13:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:35.542 13:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:35.542 13:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:35.542 13:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:35.542 13:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:35.542 13:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:35.542 13:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.542 13:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:35.542 13:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:35.542 13:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.542 13:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:35.542 "name": "raid_bdev1", 00:23:35.542 "uuid": "b5ffa764-2f07-4013-9f10-22fd10f235a3", 00:23:35.542 
"strip_size_kb": 0, 00:23:35.542 "state": "online", 00:23:35.542 "raid_level": "raid1", 00:23:35.542 "superblock": true, 00:23:35.542 "num_base_bdevs": 2, 00:23:35.542 "num_base_bdevs_discovered": 1, 00:23:35.542 "num_base_bdevs_operational": 1, 00:23:35.542 "base_bdevs_list": [ 00:23:35.542 { 00:23:35.542 "name": null, 00:23:35.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:35.542 "is_configured": false, 00:23:35.542 "data_offset": 0, 00:23:35.542 "data_size": 7936 00:23:35.542 }, 00:23:35.542 { 00:23:35.542 "name": "BaseBdev2", 00:23:35.542 "uuid": "3e829f17-08e0-5386-8882-65b4a751a1f6", 00:23:35.542 "is_configured": true, 00:23:35.542 "data_offset": 256, 00:23:35.542 "data_size": 7936 00:23:35.542 } 00:23:35.542 ] 00:23:35.542 }' 00:23:35.542 13:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:35.542 13:43:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:35.542 13:43:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:35.802 13:43:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:35.802 13:43:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87575 00:23:35.802 13:43:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87575 ']' 00:23:35.802 13:43:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87575 00:23:35.802 13:43:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:23:35.802 13:43:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:35.802 13:43:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87575 00:23:35.802 killing process with 
pid 87575 00:23:35.802 Received shutdown signal, test time was about 60.000000 seconds 00:23:35.802 00:23:35.802 Latency(us) 00:23:35.802 [2024-11-20T13:43:35.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.802 [2024-11-20T13:43:35.287Z] =================================================================================================================== 00:23:35.802 [2024-11-20T13:43:35.287Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:35.802 13:43:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:35.802 13:43:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:35.802 13:43:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87575' 00:23:35.802 13:43:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87575 00:23:35.803 [2024-11-20 13:43:35.095891] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:35.803 13:43:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87575 00:23:35.803 [2024-11-20 13:43:35.096021] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:35.803 [2024-11-20 13:43:35.096083] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:35.803 [2024-11-20 13:43:35.096099] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:23:36.062 [2024-11-20 13:43:35.422285] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:37.441 13:43:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:23:37.441 00:23:37.441 real 0m19.828s 00:23:37.441 user 0m25.649s 00:23:37.441 sys 0m2.913s 00:23:37.441 13:43:36 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:37.441 13:43:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:37.441 ************************************ 00:23:37.441 END TEST raid_rebuild_test_sb_md_separate 00:23:37.441 ************************************ 00:23:37.441 13:43:36 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:23:37.441 13:43:36 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:23:37.441 13:43:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:37.441 13:43:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:37.441 13:43:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:37.441 ************************************ 00:23:37.441 START TEST raid_state_function_test_sb_md_interleaved 00:23:37.441 ************************************ 00:23:37.441 13:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:23:37.441 13:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:23:37.441 13:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:23:37.441 13:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:23:37.441 13:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:23:37.441 13:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:23:37.441 13:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:37.441 13:43:36 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:23:37.441 13:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:37.441 13:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:37.441 13:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:23:37.441 13:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:37.441 13:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:37.441 13:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:37.441 13:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:23:37.441 13:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:23:37.441 13:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:23:37.441 13:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:23:37.441 13:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:23:37.441 13:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:23:37.441 13:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:23:37.441 13:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:23:37.441 13:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:23:37.441 13:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88265 00:23:37.441 Process raid pid: 88265 00:23:37.441 13:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:37.441 13:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88265' 00:23:37.441 13:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88265 00:23:37.441 13:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88265 ']' 00:23:37.441 13:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:37.441 13:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:37.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:37.441 13:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:37.441 13:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:37.441 13:43:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:37.441 [2024-11-20 13:43:36.734474] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:23:37.441 [2024-11-20 13:43:36.734610] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:37.441 [2024-11-20 13:43:36.916508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.701 [2024-11-20 13:43:37.029138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.960 [2024-11-20 13:43:37.258402] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:37.961 [2024-11-20 13:43:37.258455] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:38.219 13:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:38.219 13:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:23:38.219 13:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:38.219 13:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.219 13:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:38.219 [2024-11-20 13:43:37.573257] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:38.219 [2024-11-20 13:43:37.573310] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:38.219 [2024-11-20 13:43:37.573321] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:38.219 [2024-11-20 13:43:37.573334] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:38.219 13:43:37 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.219 13:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:38.219 13:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:38.219 13:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:38.219 13:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:38.219 13:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:38.219 13:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:38.219 13:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:38.219 13:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:38.219 13:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:38.219 13:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:38.219 13:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:38.219 13:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:38.219 13:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.219 13:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:38.219 13:43:37 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.219 13:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:38.219 "name": "Existed_Raid", 00:23:38.219 "uuid": "de648f8a-4c39-4bb0-8656-2a48095d8f8e", 00:23:38.219 "strip_size_kb": 0, 00:23:38.219 "state": "configuring", 00:23:38.219 "raid_level": "raid1", 00:23:38.219 "superblock": true, 00:23:38.219 "num_base_bdevs": 2, 00:23:38.219 "num_base_bdevs_discovered": 0, 00:23:38.219 "num_base_bdevs_operational": 2, 00:23:38.219 "base_bdevs_list": [ 00:23:38.219 { 00:23:38.219 "name": "BaseBdev1", 00:23:38.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:38.219 "is_configured": false, 00:23:38.219 "data_offset": 0, 00:23:38.219 "data_size": 0 00:23:38.219 }, 00:23:38.219 { 00:23:38.219 "name": "BaseBdev2", 00:23:38.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:38.219 "is_configured": false, 00:23:38.219 "data_offset": 0, 00:23:38.219 "data_size": 0 00:23:38.219 } 00:23:38.219 ] 00:23:38.219 }' 00:23:38.219 13:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:38.219 13:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:38.787 13:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:38.787 13:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.787 13:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:38.787 [2024-11-20 13:43:37.976640] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:38.787 [2024-11-20 13:43:37.976683] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:23:38.787 13:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.787 13:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:38.787 13:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.787 13:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:38.787 [2024-11-20 13:43:37.988625] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:38.787 [2024-11-20 13:43:37.988670] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:38.787 [2024-11-20 13:43:37.988681] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:38.787 [2024-11-20 13:43:37.988696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:38.787 13:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.787 13:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:23:38.787 13:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.787 13:43:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:38.787 [2024-11-20 13:43:38.036988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:38.787 BaseBdev1 00:23:38.787 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.787 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:23:38.787 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:23:38.787 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:38.787 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:23:38.787 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:38.787 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:38.787 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:38.787 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.787 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:38.787 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.787 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:38.787 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.787 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:38.787 [ 00:23:38.787 { 00:23:38.787 "name": "BaseBdev1", 00:23:38.787 "aliases": [ 00:23:38.787 "7d958b36-aa97-4bec-8c56-b5e0880babae" 00:23:38.787 ], 00:23:38.787 "product_name": "Malloc disk", 00:23:38.787 "block_size": 4128, 00:23:38.787 "num_blocks": 8192, 00:23:38.787 "uuid": "7d958b36-aa97-4bec-8c56-b5e0880babae", 00:23:38.787 "md_size": 32, 00:23:38.787 
"md_interleave": true, 00:23:38.787 "dif_type": 0, 00:23:38.787 "assigned_rate_limits": { 00:23:38.787 "rw_ios_per_sec": 0, 00:23:38.787 "rw_mbytes_per_sec": 0, 00:23:38.787 "r_mbytes_per_sec": 0, 00:23:38.787 "w_mbytes_per_sec": 0 00:23:38.787 }, 00:23:38.787 "claimed": true, 00:23:38.787 "claim_type": "exclusive_write", 00:23:38.787 "zoned": false, 00:23:38.787 "supported_io_types": { 00:23:38.787 "read": true, 00:23:38.787 "write": true, 00:23:38.787 "unmap": true, 00:23:38.787 "flush": true, 00:23:38.787 "reset": true, 00:23:38.787 "nvme_admin": false, 00:23:38.787 "nvme_io": false, 00:23:38.787 "nvme_io_md": false, 00:23:38.787 "write_zeroes": true, 00:23:38.787 "zcopy": true, 00:23:38.787 "get_zone_info": false, 00:23:38.787 "zone_management": false, 00:23:38.787 "zone_append": false, 00:23:38.787 "compare": false, 00:23:38.787 "compare_and_write": false, 00:23:38.787 "abort": true, 00:23:38.787 "seek_hole": false, 00:23:38.787 "seek_data": false, 00:23:38.787 "copy": true, 00:23:38.787 "nvme_iov_md": false 00:23:38.787 }, 00:23:38.787 "memory_domains": [ 00:23:38.787 { 00:23:38.787 "dma_device_id": "system", 00:23:38.787 "dma_device_type": 1 00:23:38.787 }, 00:23:38.787 { 00:23:38.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:38.787 "dma_device_type": 2 00:23:38.787 } 00:23:38.787 ], 00:23:38.787 "driver_specific": {} 00:23:38.787 } 00:23:38.787 ] 00:23:38.787 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.788 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:23:38.788 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:38.788 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:38.788 13:43:38 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:38.788 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:38.788 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:38.788 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:38.788 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:38.788 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:38.788 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:38.788 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:38.788 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:38.788 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:38.788 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.788 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:38.788 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.788 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:38.788 "name": "Existed_Raid", 00:23:38.788 "uuid": "39d0df23-1b18-46e9-8ee9-a88174e41ff5", 00:23:38.788 "strip_size_kb": 0, 00:23:38.788 "state": "configuring", 00:23:38.788 "raid_level": "raid1", 
00:23:38.788 "superblock": true, 00:23:38.788 "num_base_bdevs": 2, 00:23:38.788 "num_base_bdevs_discovered": 1, 00:23:38.788 "num_base_bdevs_operational": 2, 00:23:38.788 "base_bdevs_list": [ 00:23:38.788 { 00:23:38.788 "name": "BaseBdev1", 00:23:38.788 "uuid": "7d958b36-aa97-4bec-8c56-b5e0880babae", 00:23:38.788 "is_configured": true, 00:23:38.788 "data_offset": 256, 00:23:38.788 "data_size": 7936 00:23:38.788 }, 00:23:38.788 { 00:23:38.788 "name": "BaseBdev2", 00:23:38.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:38.788 "is_configured": false, 00:23:38.788 "data_offset": 0, 00:23:38.788 "data_size": 0 00:23:38.788 } 00:23:38.788 ] 00:23:38.788 }' 00:23:38.788 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:38.788 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:39.075 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:39.075 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.075 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:39.075 [2024-11-20 13:43:38.516387] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:39.075 [2024-11-20 13:43:38.516445] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:23:39.075 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.075 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:39.075 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:23:39.075 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:39.075 [2024-11-20 13:43:38.528425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:39.075 [2024-11-20 13:43:38.530505] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:39.075 [2024-11-20 13:43:38.530549] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:39.075 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.075 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:23:39.075 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:39.075 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:39.075 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:39.075 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:39.075 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:39.075 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:39.075 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:39.075 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:39.075 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:39.075 
13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:39.075 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:39.075 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:39.075 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:39.075 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.075 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:39.353 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.353 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:39.353 "name": "Existed_Raid", 00:23:39.353 "uuid": "fd735012-e4c7-4dbb-93eb-524b648bb9cd", 00:23:39.353 "strip_size_kb": 0, 00:23:39.353 "state": "configuring", 00:23:39.353 "raid_level": "raid1", 00:23:39.353 "superblock": true, 00:23:39.353 "num_base_bdevs": 2, 00:23:39.353 "num_base_bdevs_discovered": 1, 00:23:39.353 "num_base_bdevs_operational": 2, 00:23:39.353 "base_bdevs_list": [ 00:23:39.353 { 00:23:39.353 "name": "BaseBdev1", 00:23:39.353 "uuid": "7d958b36-aa97-4bec-8c56-b5e0880babae", 00:23:39.353 "is_configured": true, 00:23:39.353 "data_offset": 256, 00:23:39.353 "data_size": 7936 00:23:39.353 }, 00:23:39.353 { 00:23:39.353 "name": "BaseBdev2", 00:23:39.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:39.353 "is_configured": false, 00:23:39.353 "data_offset": 0, 00:23:39.354 "data_size": 0 00:23:39.354 } 00:23:39.354 ] 00:23:39.354 }' 00:23:39.354 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:23:39.354 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:39.612 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:23:39.613 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.613 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:39.613 [2024-11-20 13:43:38.980222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:39.613 [2024-11-20 13:43:38.980464] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:39.613 [2024-11-20 13:43:38.980479] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:39.613 BaseBdev2 00:23:39.613 [2024-11-20 13:43:38.980559] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:39.613 [2024-11-20 13:43:38.980635] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:39.613 [2024-11-20 13:43:38.980648] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:23:39.613 [2024-11-20 13:43:38.980704] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:39.613 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.613 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:23:39.613 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:23:39.613 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:23:39.613 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:23:39.613 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:39.613 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:39.613 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:39.613 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.613 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:39.613 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.613 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:39.613 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.613 13:43:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:39.613 [ 00:23:39.613 { 00:23:39.613 "name": "BaseBdev2", 00:23:39.613 "aliases": [ 00:23:39.613 "da58279c-4b66-40d1-8a2d-90c9d20c098a" 00:23:39.613 ], 00:23:39.613 "product_name": "Malloc disk", 00:23:39.613 "block_size": 4128, 00:23:39.613 "num_blocks": 8192, 00:23:39.613 "uuid": "da58279c-4b66-40d1-8a2d-90c9d20c098a", 00:23:39.613 "md_size": 32, 00:23:39.613 "md_interleave": true, 00:23:39.613 "dif_type": 0, 00:23:39.613 "assigned_rate_limits": { 00:23:39.613 "rw_ios_per_sec": 0, 00:23:39.613 "rw_mbytes_per_sec": 0, 00:23:39.613 "r_mbytes_per_sec": 0, 00:23:39.613 "w_mbytes_per_sec": 0 00:23:39.613 }, 00:23:39.613 "claimed": true, 00:23:39.613 "claim_type": "exclusive_write", 
00:23:39.613 "zoned": false, 00:23:39.613 "supported_io_types": { 00:23:39.613 "read": true, 00:23:39.613 "write": true, 00:23:39.613 "unmap": true, 00:23:39.613 "flush": true, 00:23:39.613 "reset": true, 00:23:39.613 "nvme_admin": false, 00:23:39.613 "nvme_io": false, 00:23:39.613 "nvme_io_md": false, 00:23:39.613 "write_zeroes": true, 00:23:39.613 "zcopy": true, 00:23:39.613 "get_zone_info": false, 00:23:39.613 "zone_management": false, 00:23:39.613 "zone_append": false, 00:23:39.613 "compare": false, 00:23:39.613 "compare_and_write": false, 00:23:39.613 "abort": true, 00:23:39.613 "seek_hole": false, 00:23:39.613 "seek_data": false, 00:23:39.613 "copy": true, 00:23:39.613 "nvme_iov_md": false 00:23:39.613 }, 00:23:39.613 "memory_domains": [ 00:23:39.613 { 00:23:39.613 "dma_device_id": "system", 00:23:39.613 "dma_device_type": 1 00:23:39.613 }, 00:23:39.613 { 00:23:39.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:39.613 "dma_device_type": 2 00:23:39.613 } 00:23:39.613 ], 00:23:39.613 "driver_specific": {} 00:23:39.613 } 00:23:39.613 ] 00:23:39.613 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.613 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:23:39.613 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:39.613 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:39.613 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:23:39.613 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:39.613 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:39.613 
13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:39.613 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:39.613 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:39.613 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:39.613 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:39.613 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:39.613 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:39.613 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:39.613 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.613 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:39.613 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:39.613 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.613 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:39.613 "name": "Existed_Raid", 00:23:39.613 "uuid": "fd735012-e4c7-4dbb-93eb-524b648bb9cd", 00:23:39.613 "strip_size_kb": 0, 00:23:39.613 "state": "online", 00:23:39.613 "raid_level": "raid1", 00:23:39.613 "superblock": true, 00:23:39.613 "num_base_bdevs": 2, 00:23:39.613 "num_base_bdevs_discovered": 2, 00:23:39.613 
"num_base_bdevs_operational": 2, 00:23:39.613 "base_bdevs_list": [ 00:23:39.613 { 00:23:39.613 "name": "BaseBdev1", 00:23:39.613 "uuid": "7d958b36-aa97-4bec-8c56-b5e0880babae", 00:23:39.613 "is_configured": true, 00:23:39.613 "data_offset": 256, 00:23:39.613 "data_size": 7936 00:23:39.613 }, 00:23:39.613 { 00:23:39.613 "name": "BaseBdev2", 00:23:39.613 "uuid": "da58279c-4b66-40d1-8a2d-90c9d20c098a", 00:23:39.613 "is_configured": true, 00:23:39.613 "data_offset": 256, 00:23:39.613 "data_size": 7936 00:23:39.613 } 00:23:39.613 ] 00:23:39.613 }' 00:23:39.613 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:39.613 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:40.185 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:23:40.185 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:40.185 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:40.185 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:40.185 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:23:40.185 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:40.185 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:40.185 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.185 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:40.185 13:43:39 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:40.185 [2024-11-20 13:43:39.400377] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:40.185 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.185 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:40.185 "name": "Existed_Raid", 00:23:40.185 "aliases": [ 00:23:40.185 "fd735012-e4c7-4dbb-93eb-524b648bb9cd" 00:23:40.185 ], 00:23:40.185 "product_name": "Raid Volume", 00:23:40.185 "block_size": 4128, 00:23:40.185 "num_blocks": 7936, 00:23:40.185 "uuid": "fd735012-e4c7-4dbb-93eb-524b648bb9cd", 00:23:40.185 "md_size": 32, 00:23:40.185 "md_interleave": true, 00:23:40.185 "dif_type": 0, 00:23:40.185 "assigned_rate_limits": { 00:23:40.185 "rw_ios_per_sec": 0, 00:23:40.185 "rw_mbytes_per_sec": 0, 00:23:40.185 "r_mbytes_per_sec": 0, 00:23:40.185 "w_mbytes_per_sec": 0 00:23:40.185 }, 00:23:40.185 "claimed": false, 00:23:40.185 "zoned": false, 00:23:40.185 "supported_io_types": { 00:23:40.185 "read": true, 00:23:40.185 "write": true, 00:23:40.185 "unmap": false, 00:23:40.185 "flush": false, 00:23:40.185 "reset": true, 00:23:40.185 "nvme_admin": false, 00:23:40.185 "nvme_io": false, 00:23:40.185 "nvme_io_md": false, 00:23:40.185 "write_zeroes": true, 00:23:40.185 "zcopy": false, 00:23:40.185 "get_zone_info": false, 00:23:40.185 "zone_management": false, 00:23:40.185 "zone_append": false, 00:23:40.185 "compare": false, 00:23:40.185 "compare_and_write": false, 00:23:40.185 "abort": false, 00:23:40.185 "seek_hole": false, 00:23:40.185 "seek_data": false, 00:23:40.185 "copy": false, 00:23:40.185 "nvme_iov_md": false 00:23:40.185 }, 00:23:40.185 "memory_domains": [ 00:23:40.185 { 00:23:40.185 "dma_device_id": "system", 00:23:40.185 "dma_device_type": 1 00:23:40.185 }, 00:23:40.185 { 00:23:40.185 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:23:40.185 "dma_device_type": 2 00:23:40.185 }, 00:23:40.185 { 00:23:40.185 "dma_device_id": "system", 00:23:40.185 "dma_device_type": 1 00:23:40.185 }, 00:23:40.185 { 00:23:40.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:40.185 "dma_device_type": 2 00:23:40.185 } 00:23:40.185 ], 00:23:40.185 "driver_specific": { 00:23:40.185 "raid": { 00:23:40.185 "uuid": "fd735012-e4c7-4dbb-93eb-524b648bb9cd", 00:23:40.185 "strip_size_kb": 0, 00:23:40.185 "state": "online", 00:23:40.185 "raid_level": "raid1", 00:23:40.185 "superblock": true, 00:23:40.185 "num_base_bdevs": 2, 00:23:40.185 "num_base_bdevs_discovered": 2, 00:23:40.185 "num_base_bdevs_operational": 2, 00:23:40.185 "base_bdevs_list": [ 00:23:40.185 { 00:23:40.185 "name": "BaseBdev1", 00:23:40.185 "uuid": "7d958b36-aa97-4bec-8c56-b5e0880babae", 00:23:40.185 "is_configured": true, 00:23:40.185 "data_offset": 256, 00:23:40.185 "data_size": 7936 00:23:40.185 }, 00:23:40.185 { 00:23:40.185 "name": "BaseBdev2", 00:23:40.185 "uuid": "da58279c-4b66-40d1-8a2d-90c9d20c098a", 00:23:40.185 "is_configured": true, 00:23:40.185 "data_offset": 256, 00:23:40.185 "data_size": 7936 00:23:40.185 } 00:23:40.185 ] 00:23:40.185 } 00:23:40.185 } 00:23:40.185 }' 00:23:40.185 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:40.185 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:23:40.185 BaseBdev2' 00:23:40.185 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:40.185 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:23:40.185 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:23:40.185 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:23:40.185 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.185 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:40.185 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:40.185 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.185 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:23:40.185 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:23:40.185 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:40.185 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:40.185 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:40.185 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.185 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:40.185 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.185 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:23:40.185 
13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:23:40.185 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:40.185 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.185 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:40.186 [2024-11-20 13:43:39.611814] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:40.444 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.444 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:23:40.444 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:23:40.444 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:40.445 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:23:40.445 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:23:40.445 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:23:40.445 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:40.445 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:40.445 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:40.445 13:43:39 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:40.445 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:40.445 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:40.445 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:40.445 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:40.445 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:40.445 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:40.445 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.445 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:40.445 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:40.445 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.445 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:40.445 "name": "Existed_Raid", 00:23:40.445 "uuid": "fd735012-e4c7-4dbb-93eb-524b648bb9cd", 00:23:40.445 "strip_size_kb": 0, 00:23:40.445 "state": "online", 00:23:40.445 "raid_level": "raid1", 00:23:40.445 "superblock": true, 00:23:40.445 "num_base_bdevs": 2, 00:23:40.445 "num_base_bdevs_discovered": 1, 00:23:40.445 "num_base_bdevs_operational": 1, 00:23:40.445 "base_bdevs_list": [ 00:23:40.445 { 00:23:40.445 "name": null, 00:23:40.445 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:23:40.445 "is_configured": false, 00:23:40.445 "data_offset": 0, 00:23:40.445 "data_size": 7936 00:23:40.445 }, 00:23:40.445 { 00:23:40.445 "name": "BaseBdev2", 00:23:40.445 "uuid": "da58279c-4b66-40d1-8a2d-90c9d20c098a", 00:23:40.445 "is_configured": true, 00:23:40.445 "data_offset": 256, 00:23:40.445 "data_size": 7936 00:23:40.445 } 00:23:40.445 ] 00:23:40.445 }' 00:23:40.445 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:40.445 13:43:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:40.703 13:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:23:40.703 13:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:40.703 13:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:40.703 13:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:40.703 13:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.703 13:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:40.703 13:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.703 13:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:40.703 13:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:40.703 13:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:23:40.703 13:43:40 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.703 13:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:40.703 [2024-11-20 13:43:40.126498] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:40.703 [2024-11-20 13:43:40.126614] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:40.962 [2024-11-20 13:43:40.221725] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:40.962 [2024-11-20 13:43:40.221782] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:40.962 [2024-11-20 13:43:40.221798] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:23:40.962 13:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.962 13:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:40.963 13:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:40.963 13:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:40.963 13:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.963 13:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:23:40.963 13:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:40.963 13:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.963 13:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:23:40.963 13:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:23:40.963 13:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:23:40.963 13:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88265 00:23:40.963 13:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88265 ']' 00:23:40.963 13:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88265 00:23:40.963 13:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:23:40.963 13:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:40.963 13:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88265 00:23:40.963 13:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:40.963 13:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:40.963 killing process with pid 88265 00:23:40.963 13:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88265' 00:23:40.963 13:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88265 00:23:40.963 [2024-11-20 13:43:40.312202] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:40.963 13:43:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88265 00:23:40.963 [2024-11-20 13:43:40.328851] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:42.340 
13:43:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:23:42.340 00:23:42.340 real 0m4.840s 00:23:42.340 user 0m6.851s 00:23:42.340 sys 0m0.916s 00:23:42.340 13:43:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:42.340 13:43:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:42.340 ************************************ 00:23:42.340 END TEST raid_state_function_test_sb_md_interleaved 00:23:42.340 ************************************ 00:23:42.340 13:43:41 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:23:42.340 13:43:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:42.340 13:43:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:42.340 13:43:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:42.340 ************************************ 00:23:42.340 START TEST raid_superblock_test_md_interleaved 00:23:42.340 ************************************ 00:23:42.340 13:43:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:23:42.340 13:43:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:23:42.340 13:43:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:23:42.340 13:43:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:23:42.340 13:43:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:23:42.340 13:43:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:23:42.340 13:43:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:23:42.340 13:43:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:23:42.340 13:43:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:23:42.340 13:43:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:23:42.340 13:43:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:23:42.340 13:43:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:23:42.340 13:43:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:23:42.340 13:43:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:23:42.340 13:43:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:23:42.340 13:43:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:23:42.340 13:43:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88512 00:23:42.340 13:43:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88512 00:23:42.340 13:43:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:23:42.340 13:43:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88512 ']' 00:23:42.340 13:43:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:42.340 13:43:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:42.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:42.340 13:43:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:42.340 13:43:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:42.340 13:43:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:42.340 [2024-11-20 13:43:41.643511] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:23:42.340 [2024-11-20 13:43:41.643646] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88512 ] 00:23:42.599 [2024-11-20 13:43:41.825124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.599 [2024-11-20 13:43:41.948064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:42.858 [2024-11-20 13:43:42.158311] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:42.858 [2024-11-20 13:43:42.158359] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:43.118 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:43.118 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:23:43.118 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:23:43.118 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:43.118 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:23:43.118 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:23:43.118 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:43.118 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:43.118 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:43.118 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:43.118 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:23:43.118 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.118 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:43.118 malloc1 00:23:43.118 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.118 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:43.118 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.118 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:43.118 [2024-11-20 13:43:42.558490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:43.118 [2024-11-20 13:43:42.558559] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:43.118 [2024-11-20 13:43:42.558584] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:43.118 [2024-11-20 13:43:42.558596] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:43.118 
[2024-11-20 13:43:42.560717] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:43.118 [2024-11-20 13:43:42.560759] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:43.118 pt1 00:23:43.118 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.118 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:43.118 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:43.118 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:23:43.118 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:23:43.118 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:43.118 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:43.118 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:43.118 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:43.118 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:23:43.118 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.118 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:43.376 malloc2 00:23:43.376 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.376 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:43.376 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.376 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:43.376 [2024-11-20 13:43:42.614514] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:43.376 [2024-11-20 13:43:42.614587] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:43.376 [2024-11-20 13:43:42.614612] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:43.376 [2024-11-20 13:43:42.614624] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:43.376 [2024-11-20 13:43:42.616702] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:43.376 [2024-11-20 13:43:42.616739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:43.376 pt2 00:23:43.377 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.377 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:43.377 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:43.377 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:23:43.377 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.377 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:43.377 [2024-11-20 13:43:42.626527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:43.377 [2024-11-20 13:43:42.628532] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:43.377 [2024-11-20 13:43:42.628723] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:43.377 [2024-11-20 13:43:42.628737] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:43.377 [2024-11-20 13:43:42.628822] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:43.377 [2024-11-20 13:43:42.628892] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:43.377 [2024-11-20 13:43:42.628905] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:43.377 [2024-11-20 13:43:42.628974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:43.377 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.377 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:43.377 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:43.377 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:43.377 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:43.377 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:43.377 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:43.377 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:43.377 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:43.377 
13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:43.377 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:43.377 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:43.377 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.377 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.377 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:43.377 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.377 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:43.377 "name": "raid_bdev1", 00:23:43.377 "uuid": "7775a7ef-013a-4097-a70e-316f6a24007b", 00:23:43.377 "strip_size_kb": 0, 00:23:43.377 "state": "online", 00:23:43.377 "raid_level": "raid1", 00:23:43.377 "superblock": true, 00:23:43.377 "num_base_bdevs": 2, 00:23:43.377 "num_base_bdevs_discovered": 2, 00:23:43.377 "num_base_bdevs_operational": 2, 00:23:43.377 "base_bdevs_list": [ 00:23:43.377 { 00:23:43.377 "name": "pt1", 00:23:43.377 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:43.377 "is_configured": true, 00:23:43.377 "data_offset": 256, 00:23:43.377 "data_size": 7936 00:23:43.377 }, 00:23:43.377 { 00:23:43.377 "name": "pt2", 00:23:43.377 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:43.377 "is_configured": true, 00:23:43.377 "data_offset": 256, 00:23:43.377 "data_size": 7936 00:23:43.377 } 00:23:43.377 ] 00:23:43.377 }' 00:23:43.377 13:43:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:43.377 13:43:42 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:43.650 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:23:43.650 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:23:43.650 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:43.650 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:43.650 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:23:43.650 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:43.650 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:43.650 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.650 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:43.650 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:43.650 [2024-11-20 13:43:43.022665] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:43.650 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.650 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:43.650 "name": "raid_bdev1", 00:23:43.650 "aliases": [ 00:23:43.650 "7775a7ef-013a-4097-a70e-316f6a24007b" 00:23:43.650 ], 00:23:43.650 "product_name": "Raid Volume", 00:23:43.650 "block_size": 4128, 00:23:43.650 "num_blocks": 7936, 00:23:43.650 "uuid": "7775a7ef-013a-4097-a70e-316f6a24007b", 00:23:43.650 "md_size": 32, 
00:23:43.650 "md_interleave": true, 00:23:43.650 "dif_type": 0, 00:23:43.650 "assigned_rate_limits": { 00:23:43.650 "rw_ios_per_sec": 0, 00:23:43.650 "rw_mbytes_per_sec": 0, 00:23:43.650 "r_mbytes_per_sec": 0, 00:23:43.650 "w_mbytes_per_sec": 0 00:23:43.650 }, 00:23:43.650 "claimed": false, 00:23:43.650 "zoned": false, 00:23:43.650 "supported_io_types": { 00:23:43.650 "read": true, 00:23:43.650 "write": true, 00:23:43.650 "unmap": false, 00:23:43.650 "flush": false, 00:23:43.650 "reset": true, 00:23:43.650 "nvme_admin": false, 00:23:43.650 "nvme_io": false, 00:23:43.650 "nvme_io_md": false, 00:23:43.650 "write_zeroes": true, 00:23:43.650 "zcopy": false, 00:23:43.650 "get_zone_info": false, 00:23:43.650 "zone_management": false, 00:23:43.650 "zone_append": false, 00:23:43.650 "compare": false, 00:23:43.650 "compare_and_write": false, 00:23:43.650 "abort": false, 00:23:43.650 "seek_hole": false, 00:23:43.650 "seek_data": false, 00:23:43.650 "copy": false, 00:23:43.650 "nvme_iov_md": false 00:23:43.650 }, 00:23:43.650 "memory_domains": [ 00:23:43.650 { 00:23:43.650 "dma_device_id": "system", 00:23:43.650 "dma_device_type": 1 00:23:43.650 }, 00:23:43.650 { 00:23:43.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:43.650 "dma_device_type": 2 00:23:43.650 }, 00:23:43.650 { 00:23:43.650 "dma_device_id": "system", 00:23:43.650 "dma_device_type": 1 00:23:43.650 }, 00:23:43.650 { 00:23:43.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:43.650 "dma_device_type": 2 00:23:43.650 } 00:23:43.650 ], 00:23:43.650 "driver_specific": { 00:23:43.650 "raid": { 00:23:43.650 "uuid": "7775a7ef-013a-4097-a70e-316f6a24007b", 00:23:43.650 "strip_size_kb": 0, 00:23:43.650 "state": "online", 00:23:43.650 "raid_level": "raid1", 00:23:43.650 "superblock": true, 00:23:43.650 "num_base_bdevs": 2, 00:23:43.650 "num_base_bdevs_discovered": 2, 00:23:43.650 "num_base_bdevs_operational": 2, 00:23:43.650 "base_bdevs_list": [ 00:23:43.650 { 00:23:43.650 "name": "pt1", 00:23:43.650 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:23:43.650 "is_configured": true, 00:23:43.650 "data_offset": 256, 00:23:43.650 "data_size": 7936 00:23:43.650 }, 00:23:43.650 { 00:23:43.650 "name": "pt2", 00:23:43.650 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:43.650 "is_configured": true, 00:23:43.650 "data_offset": 256, 00:23:43.650 "data_size": 7936 00:23:43.650 } 00:23:43.650 ] 00:23:43.650 } 00:23:43.650 } 00:23:43.650 }' 00:23:43.650 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:43.650 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:23:43.650 pt2' 00:23:43.650 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:43.940 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:23:43.940 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:43.940 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:23:43.940 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.940 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:43.940 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:43.940 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.940 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:23:43.940 13:43:43 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:23:43.940 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:43.940 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:43.940 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:23:43.940 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.940 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:43.940 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.940 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:23:43.940 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:23:43.940 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:43.940 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.940 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:43.940 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:23:43.940 [2024-11-20 13:43:43.234493] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:43.940 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.940 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7775a7ef-013a-4097-a70e-316f6a24007b 00:23:43.940 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 7775a7ef-013a-4097-a70e-316f6a24007b ']' 00:23:43.940 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:43.940 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.940 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:43.940 [2024-11-20 13:43:43.278194] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:43.940 [2024-11-20 13:43:43.278232] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:43.940 [2024-11-20 13:43:43.278341] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:43.940 [2024-11-20 13:43:43.278401] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:43.940 [2024-11-20 13:43:43.278415] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:43.940 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.940 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:23:43.940 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:43.940 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.940 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:43.940 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.940 13:43:43 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:23:43.940 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:23:43.940 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:43.940 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:23:43.941 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.941 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:43.941 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.941 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:43.941 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:23:43.941 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.941 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:43.941 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.941 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:43.941 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:23:43.941 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.941 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:43.941 13:43:43 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.941 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:23:43.941 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:23:43.941 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:23:43.941 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:23:43.941 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:43.941 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:43.941 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:43.941 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:43.941 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:23:43.941 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.941 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:43.941 [2024-11-20 13:43:43.394328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:43.941 [2024-11-20 13:43:43.396474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:43.941 [2024-11-20 13:43:43.396554] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:23:43.941 [2024-11-20 13:43:43.396613] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:23:43.941 [2024-11-20 13:43:43.396631] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:43.941 [2024-11-20 13:43:43.396644] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:23:43.941 request: 00:23:43.941 { 00:23:43.941 "name": "raid_bdev1", 00:23:43.941 "raid_level": "raid1", 00:23:43.941 "base_bdevs": [ 00:23:43.941 "malloc1", 00:23:43.941 "malloc2" 00:23:43.941 ], 00:23:43.941 "superblock": false, 00:23:43.941 "method": "bdev_raid_create", 00:23:43.941 "req_id": 1 00:23:43.941 } 00:23:43.941 Got JSON-RPC error response 00:23:43.941 response: 00:23:43.941 { 00:23:43.941 "code": -17, 00:23:43.941 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:43.941 } 00:23:43.941 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:43.941 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:23:43.941 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:43.941 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:43.941 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:43.941 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:23:43.941 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:43.941 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.941 13:43:43 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:43.941 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.199 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:23:44.199 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:23:44.199 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:44.199 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.199 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:44.199 [2024-11-20 13:43:43.450213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:44.199 [2024-11-20 13:43:43.450311] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:44.199 [2024-11-20 13:43:43.450332] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:44.199 [2024-11-20 13:43:43.450347] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:44.199 [2024-11-20 13:43:43.452570] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:44.199 [2024-11-20 13:43:43.452614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:44.199 [2024-11-20 13:43:43.452674] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:44.199 [2024-11-20 13:43:43.452744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:44.199 pt1 00:23:44.199 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.199 13:43:43 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:23:44.199 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:44.199 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:44.199 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:44.199 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:44.199 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:44.199 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:44.199 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:44.199 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:44.199 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:44.199 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:44.199 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.199 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:44.199 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:44.199 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.199 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:44.199 
"name": "raid_bdev1", 00:23:44.199 "uuid": "7775a7ef-013a-4097-a70e-316f6a24007b", 00:23:44.199 "strip_size_kb": 0, 00:23:44.199 "state": "configuring", 00:23:44.199 "raid_level": "raid1", 00:23:44.199 "superblock": true, 00:23:44.199 "num_base_bdevs": 2, 00:23:44.199 "num_base_bdevs_discovered": 1, 00:23:44.199 "num_base_bdevs_operational": 2, 00:23:44.199 "base_bdevs_list": [ 00:23:44.199 { 00:23:44.199 "name": "pt1", 00:23:44.199 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:44.199 "is_configured": true, 00:23:44.199 "data_offset": 256, 00:23:44.199 "data_size": 7936 00:23:44.199 }, 00:23:44.199 { 00:23:44.199 "name": null, 00:23:44.199 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:44.199 "is_configured": false, 00:23:44.199 "data_offset": 256, 00:23:44.199 "data_size": 7936 00:23:44.199 } 00:23:44.199 ] 00:23:44.199 }' 00:23:44.199 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:44.199 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:44.458 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:23:44.458 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:23:44.458 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:44.458 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:44.458 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.458 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:44.458 [2024-11-20 13:43:43.917618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:44.458 [2024-11-20 13:43:43.917701] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:44.458 [2024-11-20 13:43:43.917724] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:44.458 [2024-11-20 13:43:43.917739] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:44.458 [2024-11-20 13:43:43.917910] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:44.458 [2024-11-20 13:43:43.917929] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:44.458 [2024-11-20 13:43:43.917983] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:44.458 [2024-11-20 13:43:43.918005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:44.458 [2024-11-20 13:43:43.918100] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:44.458 [2024-11-20 13:43:43.918114] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:44.458 [2024-11-20 13:43:43.918186] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:44.458 [2024-11-20 13:43:43.918247] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:44.458 [2024-11-20 13:43:43.918256] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:23:44.458 [2024-11-20 13:43:43.918345] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:44.458 pt2 00:23:44.458 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.458 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:23:44.458 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:44.458 13:43:43 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:44.458 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:44.458 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:44.458 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:44.459 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:44.459 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:44.459 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:44.459 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:44.459 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:44.459 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:44.459 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:44.459 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:44.459 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.459 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:44.716 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.716 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:44.716 "name": 
"raid_bdev1", 00:23:44.716 "uuid": "7775a7ef-013a-4097-a70e-316f6a24007b", 00:23:44.716 "strip_size_kb": 0, 00:23:44.716 "state": "online", 00:23:44.716 "raid_level": "raid1", 00:23:44.716 "superblock": true, 00:23:44.716 "num_base_bdevs": 2, 00:23:44.716 "num_base_bdevs_discovered": 2, 00:23:44.716 "num_base_bdevs_operational": 2, 00:23:44.716 "base_bdevs_list": [ 00:23:44.716 { 00:23:44.716 "name": "pt1", 00:23:44.716 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:44.716 "is_configured": true, 00:23:44.716 "data_offset": 256, 00:23:44.716 "data_size": 7936 00:23:44.716 }, 00:23:44.716 { 00:23:44.716 "name": "pt2", 00:23:44.716 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:44.716 "is_configured": true, 00:23:44.716 "data_offset": 256, 00:23:44.716 "data_size": 7936 00:23:44.716 } 00:23:44.716 ] 00:23:44.716 }' 00:23:44.716 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:44.716 13:43:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:44.974 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:23:44.974 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:23:44.974 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:44.974 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:44.974 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:23:44.974 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:44.974 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:44.974 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:44.974 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.974 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:44.974 [2024-11-20 13:43:44.369462] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:44.974 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.974 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:44.974 "name": "raid_bdev1", 00:23:44.974 "aliases": [ 00:23:44.974 "7775a7ef-013a-4097-a70e-316f6a24007b" 00:23:44.974 ], 00:23:44.974 "product_name": "Raid Volume", 00:23:44.974 "block_size": 4128, 00:23:44.974 "num_blocks": 7936, 00:23:44.974 "uuid": "7775a7ef-013a-4097-a70e-316f6a24007b", 00:23:44.974 "md_size": 32, 00:23:44.974 "md_interleave": true, 00:23:44.974 "dif_type": 0, 00:23:44.974 "assigned_rate_limits": { 00:23:44.974 "rw_ios_per_sec": 0, 00:23:44.974 "rw_mbytes_per_sec": 0, 00:23:44.974 "r_mbytes_per_sec": 0, 00:23:44.974 "w_mbytes_per_sec": 0 00:23:44.974 }, 00:23:44.974 "claimed": false, 00:23:44.974 "zoned": false, 00:23:44.974 "supported_io_types": { 00:23:44.974 "read": true, 00:23:44.974 "write": true, 00:23:44.974 "unmap": false, 00:23:44.974 "flush": false, 00:23:44.974 "reset": true, 00:23:44.974 "nvme_admin": false, 00:23:44.974 "nvme_io": false, 00:23:44.974 "nvme_io_md": false, 00:23:44.974 "write_zeroes": true, 00:23:44.974 "zcopy": false, 00:23:44.974 "get_zone_info": false, 00:23:44.974 "zone_management": false, 00:23:44.974 "zone_append": false, 00:23:44.974 "compare": false, 00:23:44.974 "compare_and_write": false, 00:23:44.974 "abort": false, 00:23:44.974 "seek_hole": false, 00:23:44.974 "seek_data": false, 00:23:44.974 "copy": false, 00:23:44.974 "nvme_iov_md": false 00:23:44.974 }, 
00:23:44.974 "memory_domains": [ 00:23:44.974 { 00:23:44.974 "dma_device_id": "system", 00:23:44.974 "dma_device_type": 1 00:23:44.974 }, 00:23:44.974 { 00:23:44.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:44.974 "dma_device_type": 2 00:23:44.974 }, 00:23:44.974 { 00:23:44.974 "dma_device_id": "system", 00:23:44.974 "dma_device_type": 1 00:23:44.974 }, 00:23:44.974 { 00:23:44.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:44.974 "dma_device_type": 2 00:23:44.974 } 00:23:44.974 ], 00:23:44.974 "driver_specific": { 00:23:44.974 "raid": { 00:23:44.974 "uuid": "7775a7ef-013a-4097-a70e-316f6a24007b", 00:23:44.974 "strip_size_kb": 0, 00:23:44.974 "state": "online", 00:23:44.974 "raid_level": "raid1", 00:23:44.974 "superblock": true, 00:23:44.974 "num_base_bdevs": 2, 00:23:44.974 "num_base_bdevs_discovered": 2, 00:23:44.974 "num_base_bdevs_operational": 2, 00:23:44.974 "base_bdevs_list": [ 00:23:44.974 { 00:23:44.974 "name": "pt1", 00:23:44.974 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:44.974 "is_configured": true, 00:23:44.974 "data_offset": 256, 00:23:44.974 "data_size": 7936 00:23:44.974 }, 00:23:44.974 { 00:23:44.974 "name": "pt2", 00:23:44.974 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:44.974 "is_configured": true, 00:23:44.974 "data_offset": 256, 00:23:44.974 "data_size": 7936 00:23:44.974 } 00:23:44.974 ] 00:23:44.974 } 00:23:44.974 } 00:23:44.974 }' 00:23:44.974 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:44.974 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:23:44.974 pt2' 00:23:44.974 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # 
cmp_raid_bdev='4128 32 true 0' 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 
true 0' 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:23:45.233 [2024-11-20 13:43:44.597164] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 7775a7ef-013a-4097-a70e-316f6a24007b '!=' 7775a7ef-013a-4097-a70e-316f6a24007b ']' 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:45.233 [2024-11-20 13:43:44.640855] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:23:45.233 "name": "raid_bdev1", 00:23:45.233 "uuid": "7775a7ef-013a-4097-a70e-316f6a24007b", 00:23:45.233 "strip_size_kb": 0, 00:23:45.233 "state": "online", 00:23:45.233 "raid_level": "raid1", 00:23:45.233 "superblock": true, 00:23:45.233 "num_base_bdevs": 2, 00:23:45.233 "num_base_bdevs_discovered": 1, 00:23:45.233 "num_base_bdevs_operational": 1, 00:23:45.233 "base_bdevs_list": [ 00:23:45.233 { 00:23:45.233 "name": null, 00:23:45.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:45.233 "is_configured": false, 00:23:45.233 "data_offset": 0, 00:23:45.233 "data_size": 7936 00:23:45.233 }, 00:23:45.233 { 00:23:45.233 "name": "pt2", 00:23:45.233 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:45.233 "is_configured": true, 00:23:45.233 "data_offset": 256, 00:23:45.233 "data_size": 7936 00:23:45.233 } 00:23:45.233 ] 00:23:45.233 }' 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:45.233 13:43:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:45.799 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:45.799 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.799 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:45.799 [2024-11-20 13:43:45.084203] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:45.799 [2024-11-20 13:43:45.084241] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:45.799 [2024-11-20 13:43:45.084321] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:45.799 [2024-11-20 13:43:45.084370] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:45.799 [2024-11-20 
13:43:45.084385] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:23:45.799 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.799 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:23:45.799 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:45.799 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.799 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:45.799 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.799 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:23:45.799 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:23:45.799 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:23:45.799 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:45.799 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:23:45.799 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.799 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:45.799 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.799 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:23:45.799 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:23:45.799 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:23:45.799 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:23:45.799 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:23:45.799 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:45.799 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.799 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:45.799 [2024-11-20 13:43:45.136193] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:45.799 [2024-11-20 13:43:45.136269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:45.799 [2024-11-20 13:43:45.136289] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:23:45.799 [2024-11-20 13:43:45.136303] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:45.799 [2024-11-20 13:43:45.138495] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:45.799 [2024-11-20 13:43:45.138544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:45.799 [2024-11-20 13:43:45.138625] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:45.800 [2024-11-20 13:43:45.138679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:45.800 [2024-11-20 13:43:45.138747] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:23:45.800 [2024-11-20 13:43:45.138763] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 
00:23:45.800 [2024-11-20 13:43:45.138856] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:45.800 [2024-11-20 13:43:45.138919] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:45.800 [2024-11-20 13:43:45.138929] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:23:45.800 [2024-11-20 13:43:45.138998] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:45.800 pt2 00:23:45.800 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.800 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:45.800 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:45.800 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:45.800 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:45.800 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:45.800 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:45.800 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:45.800 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:45.800 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:45.800 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:45.800 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:45.800 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:45.800 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.800 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:45.800 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.800 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:45.800 "name": "raid_bdev1", 00:23:45.800 "uuid": "7775a7ef-013a-4097-a70e-316f6a24007b", 00:23:45.800 "strip_size_kb": 0, 00:23:45.800 "state": "online", 00:23:45.800 "raid_level": "raid1", 00:23:45.800 "superblock": true, 00:23:45.800 "num_base_bdevs": 2, 00:23:45.800 "num_base_bdevs_discovered": 1, 00:23:45.800 "num_base_bdevs_operational": 1, 00:23:45.800 "base_bdevs_list": [ 00:23:45.800 { 00:23:45.800 "name": null, 00:23:45.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:45.800 "is_configured": false, 00:23:45.800 "data_offset": 256, 00:23:45.800 "data_size": 7936 00:23:45.800 }, 00:23:45.800 { 00:23:45.800 "name": "pt2", 00:23:45.800 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:45.800 "is_configured": true, 00:23:45.800 "data_offset": 256, 00:23:45.800 "data_size": 7936 00:23:45.800 } 00:23:45.800 ] 00:23:45.800 }' 00:23:45.800 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:45.800 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:46.367 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:46.367 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:46.367 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:46.367 [2024-11-20 13:43:45.575524] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:46.367 [2024-11-20 13:43:45.575563] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:46.367 [2024-11-20 13:43:45.575644] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:46.367 [2024-11-20 13:43:45.575701] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:46.367 [2024-11-20 13:43:45.575718] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:23:46.367 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.367 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:46.367 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.367 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:46.367 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:23:46.367 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.367 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:23:46.367 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:23:46.367 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:23:46.367 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:23:46.367 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.367 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:46.367 [2024-11-20 13:43:45.647477] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:46.367 [2024-11-20 13:43:45.647721] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:46.367 [2024-11-20 13:43:45.647816] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:23:46.367 [2024-11-20 13:43:45.647910] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:46.367 [2024-11-20 13:43:45.650315] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:46.367 [2024-11-20 13:43:45.650447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:46.367 [2024-11-20 13:43:45.650569] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:46.367 [2024-11-20 13:43:45.650632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:46.367 [2024-11-20 13:43:45.650736] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:46.367 [2024-11-20 13:43:45.650755] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:46.367 [2024-11-20 13:43:45.650778] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:23:46.367 [2024-11-20 13:43:45.650835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:46.367 [2024-11-20 13:43:45.650911] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:23:46.367 [2024-11-20 13:43:45.650921] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:46.368 [2024-11-20 13:43:45.650997] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:46.368 [2024-11-20 13:43:45.651053] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:23:46.368 [2024-11-20 13:43:45.651079] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:23:46.368 [2024-11-20 13:43:45.651157] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:46.368 pt1 00:23:46.368 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.368 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:23:46.368 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:46.368 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:46.368 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:46.368 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:46.368 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:46.368 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:46.368 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:46.368 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:46.368 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:23:46.368 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:46.368 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:46.368 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:46.368 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.368 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:46.368 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.368 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:46.368 "name": "raid_bdev1", 00:23:46.368 "uuid": "7775a7ef-013a-4097-a70e-316f6a24007b", 00:23:46.368 "strip_size_kb": 0, 00:23:46.368 "state": "online", 00:23:46.368 "raid_level": "raid1", 00:23:46.368 "superblock": true, 00:23:46.368 "num_base_bdevs": 2, 00:23:46.368 "num_base_bdevs_discovered": 1, 00:23:46.368 "num_base_bdevs_operational": 1, 00:23:46.368 "base_bdevs_list": [ 00:23:46.368 { 00:23:46.368 "name": null, 00:23:46.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:46.368 "is_configured": false, 00:23:46.368 "data_offset": 256, 00:23:46.368 "data_size": 7936 00:23:46.368 }, 00:23:46.368 { 00:23:46.368 "name": "pt2", 00:23:46.368 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:46.368 "is_configured": true, 00:23:46.368 "data_offset": 256, 00:23:46.368 "data_size": 7936 00:23:46.368 } 00:23:46.368 ] 00:23:46.368 }' 00:23:46.368 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:46.368 13:43:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:46.626 13:43:46 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:23:46.626 13:43:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.626 13:43:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:46.626 13:43:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:46.626 13:43:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.626 13:43:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:23:46.885 13:43:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:46.885 13:43:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.885 13:43:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:46.885 13:43:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:23:46.885 [2024-11-20 13:43:46.114998] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:46.885 13:43:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.885 13:43:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 7775a7ef-013a-4097-a70e-316f6a24007b '!=' 7775a7ef-013a-4097-a70e-316f6a24007b ']' 00:23:46.885 13:43:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88512 00:23:46.885 13:43:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88512 ']' 00:23:46.885 13:43:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88512 00:23:46.885 13:43:46 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:23:46.885 13:43:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:46.885 13:43:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88512 00:23:46.885 13:43:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:46.885 13:43:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:46.885 killing process with pid 88512 00:23:46.885 13:43:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88512' 00:23:46.885 13:43:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 88512 00:23:46.885 [2024-11-20 13:43:46.200068] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:46.885 13:43:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 88512 00:23:46.885 [2024-11-20 13:43:46.200183] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:46.885 [2024-11-20 13:43:46.200235] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:46.885 [2024-11-20 13:43:46.200253] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:23:47.144 [2024-11-20 13:43:46.413453] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:48.109 13:43:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:23:48.109 00:23:48.109 real 0m6.012s 00:23:48.109 user 0m8.994s 00:23:48.109 sys 0m1.204s 00:23:48.109 13:43:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:23:48.109 ************************************ 00:23:48.109 END TEST raid_superblock_test_md_interleaved 00:23:48.109 ************************************ 00:23:48.109 13:43:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:48.368 13:43:47 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:23:48.368 13:43:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:23:48.368 13:43:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:48.368 13:43:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:48.368 ************************************ 00:23:48.368 START TEST raid_rebuild_test_sb_md_interleaved 00:23:48.368 ************************************ 00:23:48.368 13:43:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:23:48.368 13:43:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:23:48.368 13:43:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:23:48.368 13:43:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:23:48.368 13:43:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:23:48.368 13:43:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:23:48.368 13:43:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:23:48.368 13:43:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:48.368 13:43:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:23:48.368 13:43:47 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:48.368 13:43:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:48.368 13:43:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:23:48.368 13:43:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:48.368 13:43:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:48.368 13:43:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:48.368 13:43:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:23:48.368 13:43:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:23:48.368 13:43:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:23:48.368 13:43:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:23:48.368 13:43:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:23:48.368 13:43:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:23:48.368 13:43:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:23:48.368 13:43:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:23:48.368 13:43:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:23:48.368 13:43:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:23:48.368 13:43:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=88835 00:23:48.368 13:43:47 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:48.368 13:43:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 88835 00:23:48.368 13:43:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88835 ']' 00:23:48.368 13:43:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:48.368 13:43:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:48.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:48.368 13:43:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:48.368 13:43:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:48.368 13:43:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:48.368 [2024-11-20 13:43:47.737951] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:23:48.368 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:48.368 Zero copy mechanism will not be used. 
00:23:48.368 [2024-11-20 13:43:47.738120] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88835 ] 00:23:48.628 [2024-11-20 13:43:47.919378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.628 [2024-11-20 13:43:48.039194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:48.888 [2024-11-20 13:43:48.243620] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:48.888 [2024-11-20 13:43:48.243697] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:49.147 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:49.147 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:23:49.147 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:49.147 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:23:49.147 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.147 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:49.407 BaseBdev1_malloc 00:23:49.407 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.407 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:49.407 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.407 13:43:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:49.407 [2024-11-20 13:43:48.650779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:49.407 [2024-11-20 13:43:48.650872] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:49.407 [2024-11-20 13:43:48.650898] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:49.407 [2024-11-20 13:43:48.650914] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:49.407 [2024-11-20 13:43:48.653101] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:49.407 [2024-11-20 13:43:48.653158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:49.407 BaseBdev1 00:23:49.407 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.407 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:49.407 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:23:49.407 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.407 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:49.407 BaseBdev2_malloc 00:23:49.407 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.407 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:49.407 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.407 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:23:49.407 [2024-11-20 13:43:48.704290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:49.407 [2024-11-20 13:43:48.704373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:49.407 [2024-11-20 13:43:48.704395] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:49.407 [2024-11-20 13:43:48.704412] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:49.407 [2024-11-20 13:43:48.706530] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:49.407 [2024-11-20 13:43:48.706576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:49.407 BaseBdev2 00:23:49.407 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.407 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:23:49.407 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.407 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:49.407 spare_malloc 00:23:49.407 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.407 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:49.407 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.407 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:49.407 spare_delay 00:23:49.407 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.407 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:49.407 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.407 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:49.407 [2024-11-20 13:43:48.785500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:49.407 [2024-11-20 13:43:48.785582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:49.407 [2024-11-20 13:43:48.785609] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:49.407 [2024-11-20 13:43:48.785623] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:49.407 [2024-11-20 13:43:48.787907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:49.407 [2024-11-20 13:43:48.787959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:49.407 spare 00:23:49.407 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.407 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:23:49.407 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.407 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:49.407 [2024-11-20 13:43:48.797536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:49.407 [2024-11-20 13:43:48.799628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:49.407 [2024-11-20 
13:43:48.799827] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:49.408 [2024-11-20 13:43:48.799845] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:49.408 [2024-11-20 13:43:48.799933] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:49.408 [2024-11-20 13:43:48.800006] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:49.408 [2024-11-20 13:43:48.800015] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:49.408 [2024-11-20 13:43:48.800109] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:49.408 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.408 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:49.408 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:49.408 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:49.408 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:49.408 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:49.408 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:49.408 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:49.408 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:49.408 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:23:49.408 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:49.408 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:49.408 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:49.408 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.408 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:49.408 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.408 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:49.408 "name": "raid_bdev1", 00:23:49.408 "uuid": "cb50949b-6fc2-482f-994a-e51b07b6e6c8", 00:23:49.408 "strip_size_kb": 0, 00:23:49.408 "state": "online", 00:23:49.408 "raid_level": "raid1", 00:23:49.408 "superblock": true, 00:23:49.408 "num_base_bdevs": 2, 00:23:49.408 "num_base_bdevs_discovered": 2, 00:23:49.408 "num_base_bdevs_operational": 2, 00:23:49.408 "base_bdevs_list": [ 00:23:49.408 { 00:23:49.408 "name": "BaseBdev1", 00:23:49.408 "uuid": "82502ba8-29d1-5994-88f8-09ed2bd70007", 00:23:49.408 "is_configured": true, 00:23:49.408 "data_offset": 256, 00:23:49.408 "data_size": 7936 00:23:49.408 }, 00:23:49.408 { 00:23:49.408 "name": "BaseBdev2", 00:23:49.408 "uuid": "507bef4c-9cc7-559c-9c6a-1fb55948d6eb", 00:23:49.408 "is_configured": true, 00:23:49.408 "data_offset": 256, 00:23:49.408 "data_size": 7936 00:23:49.408 } 00:23:49.408 ] 00:23:49.408 }' 00:23:49.408 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:49.408 13:43:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:49.975 13:43:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:49.975 13:43:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.975 13:43:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:49.975 13:43:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:23:49.975 [2024-11-20 13:43:49.245292] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:49.975 13:43:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.975 13:43:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:23:49.975 13:43:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:49.975 13:43:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.975 13:43:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:49.975 13:43:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:49.975 13:43:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.975 13:43:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:23:49.975 13:43:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:23:49.975 13:43:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:23:49.975 13:43:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:23:49.975 13:43:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.975 13:43:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:49.975 [2024-11-20 13:43:49.316855] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:49.975 13:43:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.975 13:43:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:49.975 13:43:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:49.975 13:43:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:49.975 13:43:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:49.975 13:43:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:49.975 13:43:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:49.975 13:43:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:49.975 13:43:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:49.975 13:43:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:49.975 13:43:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:49.975 13:43:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:49.975 13:43:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:49.975 13:43:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.975 13:43:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:49.975 13:43:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.975 13:43:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:49.975 "name": "raid_bdev1", 00:23:49.975 "uuid": "cb50949b-6fc2-482f-994a-e51b07b6e6c8", 00:23:49.975 "strip_size_kb": 0, 00:23:49.975 "state": "online", 00:23:49.975 "raid_level": "raid1", 00:23:49.975 "superblock": true, 00:23:49.975 "num_base_bdevs": 2, 00:23:49.975 "num_base_bdevs_discovered": 1, 00:23:49.975 "num_base_bdevs_operational": 1, 00:23:49.975 "base_bdevs_list": [ 00:23:49.975 { 00:23:49.975 "name": null, 00:23:49.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:49.975 "is_configured": false, 00:23:49.975 "data_offset": 0, 00:23:49.975 "data_size": 7936 00:23:49.975 }, 00:23:49.975 { 00:23:49.975 "name": "BaseBdev2", 00:23:49.975 "uuid": "507bef4c-9cc7-559c-9c6a-1fb55948d6eb", 00:23:49.975 "is_configured": true, 00:23:49.975 "data_offset": 256, 00:23:49.975 "data_size": 7936 00:23:49.975 } 00:23:49.975 ] 00:23:49.975 }' 00:23:49.975 13:43:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:49.975 13:43:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:50.543 13:43:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:50.543 13:43:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.543 13:43:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:50.543 [2024-11-20 13:43:49.760244] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:50.543 [2024-11-20 13:43:49.778000] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:50.543 13:43:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.543 13:43:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:23:50.543 [2024-11-20 13:43:49.780284] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:51.478 13:43:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:51.478 13:43:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:51.478 13:43:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:51.478 13:43:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:51.478 13:43:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:51.478 13:43:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:51.478 13:43:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.478 13:43:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:51.478 13:43:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.478 13:43:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.478 13:43:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:51.478 "name": "raid_bdev1", 00:23:51.478 
"uuid": "cb50949b-6fc2-482f-994a-e51b07b6e6c8", 00:23:51.478 "strip_size_kb": 0, 00:23:51.478 "state": "online", 00:23:51.478 "raid_level": "raid1", 00:23:51.478 "superblock": true, 00:23:51.478 "num_base_bdevs": 2, 00:23:51.478 "num_base_bdevs_discovered": 2, 00:23:51.478 "num_base_bdevs_operational": 2, 00:23:51.478 "process": { 00:23:51.478 "type": "rebuild", 00:23:51.478 "target": "spare", 00:23:51.478 "progress": { 00:23:51.478 "blocks": 2560, 00:23:51.478 "percent": 32 00:23:51.478 } 00:23:51.478 }, 00:23:51.478 "base_bdevs_list": [ 00:23:51.478 { 00:23:51.478 "name": "spare", 00:23:51.478 "uuid": "0b49559d-2374-578f-a842-2bb1649cd41f", 00:23:51.478 "is_configured": true, 00:23:51.478 "data_offset": 256, 00:23:51.478 "data_size": 7936 00:23:51.478 }, 00:23:51.478 { 00:23:51.478 "name": "BaseBdev2", 00:23:51.478 "uuid": "507bef4c-9cc7-559c-9c6a-1fb55948d6eb", 00:23:51.478 "is_configured": true, 00:23:51.478 "data_offset": 256, 00:23:51.478 "data_size": 7936 00:23:51.478 } 00:23:51.478 ] 00:23:51.478 }' 00:23:51.478 13:43:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:51.478 13:43:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:51.478 13:43:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:51.478 13:43:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:51.478 13:43:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:51.478 13:43:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.478 13:43:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:51.478 [2024-11-20 13:43:50.927815] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:23:51.737 [2024-11-20 13:43:50.985957] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:51.737 [2024-11-20 13:43:50.986055] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:51.737 [2024-11-20 13:43:50.986089] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:51.737 [2024-11-20 13:43:50.986106] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:51.737 13:43:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.737 13:43:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:51.737 13:43:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:51.737 13:43:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:51.737 13:43:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:51.737 13:43:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:51.737 13:43:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:51.737 13:43:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:51.737 13:43:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:51.737 13:43:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:51.737 13:43:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:51.737 13:43:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:51.737 13:43:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.737 13:43:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.737 13:43:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:51.737 13:43:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.737 13:43:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:51.737 "name": "raid_bdev1", 00:23:51.737 "uuid": "cb50949b-6fc2-482f-994a-e51b07b6e6c8", 00:23:51.737 "strip_size_kb": 0, 00:23:51.737 "state": "online", 00:23:51.737 "raid_level": "raid1", 00:23:51.737 "superblock": true, 00:23:51.737 "num_base_bdevs": 2, 00:23:51.737 "num_base_bdevs_discovered": 1, 00:23:51.737 "num_base_bdevs_operational": 1, 00:23:51.737 "base_bdevs_list": [ 00:23:51.737 { 00:23:51.737 "name": null, 00:23:51.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.737 "is_configured": false, 00:23:51.737 "data_offset": 0, 00:23:51.737 "data_size": 7936 00:23:51.737 }, 00:23:51.737 { 00:23:51.737 "name": "BaseBdev2", 00:23:51.737 "uuid": "507bef4c-9cc7-559c-9c6a-1fb55948d6eb", 00:23:51.737 "is_configured": true, 00:23:51.737 "data_offset": 256, 00:23:51.737 "data_size": 7936 00:23:51.737 } 00:23:51.737 ] 00:23:51.737 }' 00:23:51.737 13:43:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:51.737 13:43:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:51.997 13:43:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:51.997 13:43:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:23:51.997 13:43:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:51.997 13:43:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:51.997 13:43:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:51.997 13:43:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.997 13:43:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:51.997 13:43:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.997 13:43:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:52.256 13:43:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.257 13:43:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:52.257 "name": "raid_bdev1", 00:23:52.257 "uuid": "cb50949b-6fc2-482f-994a-e51b07b6e6c8", 00:23:52.257 "strip_size_kb": 0, 00:23:52.257 "state": "online", 00:23:52.257 "raid_level": "raid1", 00:23:52.257 "superblock": true, 00:23:52.257 "num_base_bdevs": 2, 00:23:52.257 "num_base_bdevs_discovered": 1, 00:23:52.257 "num_base_bdevs_operational": 1, 00:23:52.257 "base_bdevs_list": [ 00:23:52.257 { 00:23:52.257 "name": null, 00:23:52.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:52.257 "is_configured": false, 00:23:52.257 "data_offset": 0, 00:23:52.257 "data_size": 7936 00:23:52.257 }, 00:23:52.257 { 00:23:52.257 "name": "BaseBdev2", 00:23:52.257 "uuid": "507bef4c-9cc7-559c-9c6a-1fb55948d6eb", 00:23:52.257 "is_configured": true, 00:23:52.257 "data_offset": 256, 00:23:52.257 "data_size": 7936 00:23:52.257 } 00:23:52.257 ] 00:23:52.257 }' 
00:23:52.257 13:43:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:52.257 13:43:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:52.257 13:43:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:52.257 13:43:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:52.257 13:43:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:52.257 13:43:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.257 13:43:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:52.257 [2024-11-20 13:43:51.595969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:52.257 [2024-11-20 13:43:51.613211] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:52.257 13:43:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.257 13:43:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:23:52.257 [2024-11-20 13:43:51.615738] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:53.195 13:43:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:53.195 13:43:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:53.195 13:43:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:53.195 13:43:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:23:53.195 13:43:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:53.195 13:43:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:53.195 13:43:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:53.195 13:43:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.195 13:43:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:53.195 13:43:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.195 13:43:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:53.195 "name": "raid_bdev1", 00:23:53.195 "uuid": "cb50949b-6fc2-482f-994a-e51b07b6e6c8", 00:23:53.195 "strip_size_kb": 0, 00:23:53.195 "state": "online", 00:23:53.195 "raid_level": "raid1", 00:23:53.195 "superblock": true, 00:23:53.195 "num_base_bdevs": 2, 00:23:53.195 "num_base_bdevs_discovered": 2, 00:23:53.195 "num_base_bdevs_operational": 2, 00:23:53.195 "process": { 00:23:53.195 "type": "rebuild", 00:23:53.195 "target": "spare", 00:23:53.195 "progress": { 00:23:53.195 "blocks": 2560, 00:23:53.195 "percent": 32 00:23:53.195 } 00:23:53.195 }, 00:23:53.195 "base_bdevs_list": [ 00:23:53.195 { 00:23:53.195 "name": "spare", 00:23:53.195 "uuid": "0b49559d-2374-578f-a842-2bb1649cd41f", 00:23:53.195 "is_configured": true, 00:23:53.196 "data_offset": 256, 00:23:53.196 "data_size": 7936 00:23:53.196 }, 00:23:53.196 { 00:23:53.196 "name": "BaseBdev2", 00:23:53.196 "uuid": "507bef4c-9cc7-559c-9c6a-1fb55948d6eb", 00:23:53.196 "is_configured": true, 00:23:53.196 "data_offset": 256, 00:23:53.196 "data_size": 7936 00:23:53.196 } 00:23:53.196 ] 00:23:53.196 }' 00:23:53.196 13:43:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:53.454 13:43:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:53.455 13:43:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:53.455 13:43:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:53.455 13:43:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:23:53.455 13:43:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:23:53.455 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:23:53.455 13:43:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:23:53.455 13:43:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:23:53.455 13:43:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:23:53.455 13:43:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=740 00:23:53.455 13:43:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:53.455 13:43:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:53.455 13:43:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:53.455 13:43:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:53.455 13:43:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:53.455 13:43:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:53.455 13:43:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:53.455 13:43:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.455 13:43:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:53.455 13:43:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:53.455 13:43:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.455 13:43:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:53.455 "name": "raid_bdev1", 00:23:53.455 "uuid": "cb50949b-6fc2-482f-994a-e51b07b6e6c8", 00:23:53.455 "strip_size_kb": 0, 00:23:53.455 "state": "online", 00:23:53.455 "raid_level": "raid1", 00:23:53.455 "superblock": true, 00:23:53.455 "num_base_bdevs": 2, 00:23:53.455 "num_base_bdevs_discovered": 2, 00:23:53.455 "num_base_bdevs_operational": 2, 00:23:53.455 "process": { 00:23:53.455 "type": "rebuild", 00:23:53.455 "target": "spare", 00:23:53.455 "progress": { 00:23:53.455 "blocks": 2816, 00:23:53.455 "percent": 35 00:23:53.455 } 00:23:53.455 }, 00:23:53.455 "base_bdevs_list": [ 00:23:53.455 { 00:23:53.455 "name": "spare", 00:23:53.455 "uuid": "0b49559d-2374-578f-a842-2bb1649cd41f", 00:23:53.455 "is_configured": true, 00:23:53.455 "data_offset": 256, 00:23:53.455 "data_size": 7936 00:23:53.455 }, 00:23:53.455 { 00:23:53.455 "name": "BaseBdev2", 00:23:53.455 "uuid": "507bef4c-9cc7-559c-9c6a-1fb55948d6eb", 00:23:53.455 "is_configured": true, 00:23:53.455 "data_offset": 256, 00:23:53.455 "data_size": 7936 00:23:53.455 } 00:23:53.455 ] 00:23:53.455 }' 00:23:53.455 13:43:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:53.455 13:43:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:53.455 13:43:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:53.455 13:43:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:53.455 13:43:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:54.834 13:43:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:54.834 13:43:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:54.834 13:43:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:54.834 13:43:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:54.834 13:43:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:54.834 13:43:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:54.834 13:43:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:54.834 13:43:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.834 13:43:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:54.834 13:43:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:54.834 13:43:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.834 13:43:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:54.834 "name": "raid_bdev1", 00:23:54.834 "uuid": "cb50949b-6fc2-482f-994a-e51b07b6e6c8", 00:23:54.834 "strip_size_kb": 0, 00:23:54.834 "state": "online", 00:23:54.834 "raid_level": "raid1", 00:23:54.834 "superblock": true, 00:23:54.834 "num_base_bdevs": 2, 00:23:54.834 "num_base_bdevs_discovered": 2, 00:23:54.834 "num_base_bdevs_operational": 2, 00:23:54.834 "process": { 00:23:54.834 "type": "rebuild", 00:23:54.834 "target": "spare", 00:23:54.834 "progress": { 00:23:54.834 "blocks": 5632, 00:23:54.834 "percent": 70 00:23:54.834 } 00:23:54.834 }, 00:23:54.834 "base_bdevs_list": [ 00:23:54.834 { 00:23:54.834 "name": "spare", 00:23:54.834 "uuid": "0b49559d-2374-578f-a842-2bb1649cd41f", 00:23:54.834 "is_configured": true, 00:23:54.834 "data_offset": 256, 00:23:54.834 "data_size": 7936 00:23:54.834 }, 00:23:54.834 { 00:23:54.834 "name": "BaseBdev2", 00:23:54.834 "uuid": "507bef4c-9cc7-559c-9c6a-1fb55948d6eb", 00:23:54.834 "is_configured": true, 00:23:54.834 "data_offset": 256, 00:23:54.834 "data_size": 7936 00:23:54.834 } 00:23:54.834 ] 00:23:54.834 }' 00:23:54.834 13:43:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:54.834 13:43:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:54.834 13:43:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:54.834 13:43:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:54.834 13:43:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:55.402 [2024-11-20 13:43:54.730251] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:55.402 [2024-11-20 13:43:54.730618] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:55.402 [2024-11-20 13:43:54.730754] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:55.661 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:55.661 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:55.661 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:55.661 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:55.661 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:55.661 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:55.661 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:55.661 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.661 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:55.661 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:55.661 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.661 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:55.661 "name": "raid_bdev1", 00:23:55.661 "uuid": "cb50949b-6fc2-482f-994a-e51b07b6e6c8", 00:23:55.661 "strip_size_kb": 0, 00:23:55.661 "state": "online", 00:23:55.661 "raid_level": "raid1", 00:23:55.661 "superblock": true, 00:23:55.661 "num_base_bdevs": 2, 00:23:55.662 
"num_base_bdevs_discovered": 2, 00:23:55.662 "num_base_bdevs_operational": 2, 00:23:55.662 "base_bdevs_list": [ 00:23:55.662 { 00:23:55.662 "name": "spare", 00:23:55.662 "uuid": "0b49559d-2374-578f-a842-2bb1649cd41f", 00:23:55.662 "is_configured": true, 00:23:55.662 "data_offset": 256, 00:23:55.662 "data_size": 7936 00:23:55.662 }, 00:23:55.662 { 00:23:55.662 "name": "BaseBdev2", 00:23:55.662 "uuid": "507bef4c-9cc7-559c-9c6a-1fb55948d6eb", 00:23:55.662 "is_configured": true, 00:23:55.662 "data_offset": 256, 00:23:55.662 "data_size": 7936 00:23:55.662 } 00:23:55.662 ] 00:23:55.662 }' 00:23:55.662 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:55.662 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:55.662 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:55.921 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:23:55.921 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:23:55.921 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:55.921 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:55.921 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:55.921 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:55.921 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:55.921 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:55.921 
13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:55.921 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.921 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:55.921 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.921 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:55.921 "name": "raid_bdev1", 00:23:55.921 "uuid": "cb50949b-6fc2-482f-994a-e51b07b6e6c8", 00:23:55.921 "strip_size_kb": 0, 00:23:55.921 "state": "online", 00:23:55.921 "raid_level": "raid1", 00:23:55.921 "superblock": true, 00:23:55.921 "num_base_bdevs": 2, 00:23:55.921 "num_base_bdevs_discovered": 2, 00:23:55.921 "num_base_bdevs_operational": 2, 00:23:55.921 "base_bdevs_list": [ 00:23:55.921 { 00:23:55.921 "name": "spare", 00:23:55.921 "uuid": "0b49559d-2374-578f-a842-2bb1649cd41f", 00:23:55.921 "is_configured": true, 00:23:55.921 "data_offset": 256, 00:23:55.921 "data_size": 7936 00:23:55.921 }, 00:23:55.921 { 00:23:55.921 "name": "BaseBdev2", 00:23:55.921 "uuid": "507bef4c-9cc7-559c-9c6a-1fb55948d6eb", 00:23:55.921 "is_configured": true, 00:23:55.921 "data_offset": 256, 00:23:55.921 "data_size": 7936 00:23:55.921 } 00:23:55.921 ] 00:23:55.921 }' 00:23:55.921 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:55.921 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:55.921 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:55.921 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:55.921 13:43:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:55.921 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:55.921 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:55.921 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:55.921 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:55.921 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:55.921 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:55.921 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:55.921 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:55.921 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:55.921 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:55.921 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:55.921 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.921 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:55.921 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.921 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:55.921 "name": 
"raid_bdev1", 00:23:55.921 "uuid": "cb50949b-6fc2-482f-994a-e51b07b6e6c8", 00:23:55.921 "strip_size_kb": 0, 00:23:55.921 "state": "online", 00:23:55.921 "raid_level": "raid1", 00:23:55.921 "superblock": true, 00:23:55.921 "num_base_bdevs": 2, 00:23:55.921 "num_base_bdevs_discovered": 2, 00:23:55.921 "num_base_bdevs_operational": 2, 00:23:55.921 "base_bdevs_list": [ 00:23:55.921 { 00:23:55.921 "name": "spare", 00:23:55.921 "uuid": "0b49559d-2374-578f-a842-2bb1649cd41f", 00:23:55.922 "is_configured": true, 00:23:55.922 "data_offset": 256, 00:23:55.922 "data_size": 7936 00:23:55.922 }, 00:23:55.922 { 00:23:55.922 "name": "BaseBdev2", 00:23:55.922 "uuid": "507bef4c-9cc7-559c-9c6a-1fb55948d6eb", 00:23:55.922 "is_configured": true, 00:23:55.922 "data_offset": 256, 00:23:55.922 "data_size": 7936 00:23:55.922 } 00:23:55.922 ] 00:23:55.922 }' 00:23:55.922 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:55.922 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:56.490 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:56.490 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.490 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:56.490 [2024-11-20 13:43:55.738398] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:56.490 [2024-11-20 13:43:55.738618] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:56.490 [2024-11-20 13:43:55.738735] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:56.490 [2024-11-20 13:43:55.738807] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:56.490 [2024-11-20 
13:43:55.738819] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:56.490 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.490 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:56.490 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.490 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:56.490 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:23:56.490 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.490 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:23:56.491 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:23:56.491 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:23:56.491 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:23:56.491 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.491 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:56.491 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.491 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:56.491 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.491 13:43:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:56.491 [2024-11-20 13:43:55.794379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:56.491 [2024-11-20 13:43:55.794450] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:56.491 [2024-11-20 13:43:55.794476] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:23:56.491 [2024-11-20 13:43:55.794488] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:56.491 [2024-11-20 13:43:55.796704] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:56.491 [2024-11-20 13:43:55.796748] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:56.491 [2024-11-20 13:43:55.796814] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:56.491 [2024-11-20 13:43:55.796865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:56.491 [2024-11-20 13:43:55.796977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:56.491 spare 00:23:56.491 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.491 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:23:56.491 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.491 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:56.491 [2024-11-20 13:43:55.896910] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:23:56.491 [2024-11-20 13:43:55.896975] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:56.491 [2024-11-20 13:43:55.897134] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:23:56.491 [2024-11-20 13:43:55.897261] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:23:56.491 [2024-11-20 13:43:55.897274] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:23:56.491 [2024-11-20 13:43:55.897384] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:56.491 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.491 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:56.491 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:56.491 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:56.491 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:56.491 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:56.491 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:56.491 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:56.491 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:56.491 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:56.491 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:56.491 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:56.491 13:43:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:56.491 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.491 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:56.491 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.491 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:56.491 "name": "raid_bdev1", 00:23:56.491 "uuid": "cb50949b-6fc2-482f-994a-e51b07b6e6c8", 00:23:56.491 "strip_size_kb": 0, 00:23:56.491 "state": "online", 00:23:56.491 "raid_level": "raid1", 00:23:56.491 "superblock": true, 00:23:56.491 "num_base_bdevs": 2, 00:23:56.491 "num_base_bdevs_discovered": 2, 00:23:56.491 "num_base_bdevs_operational": 2, 00:23:56.491 "base_bdevs_list": [ 00:23:56.491 { 00:23:56.491 "name": "spare", 00:23:56.491 "uuid": "0b49559d-2374-578f-a842-2bb1649cd41f", 00:23:56.491 "is_configured": true, 00:23:56.491 "data_offset": 256, 00:23:56.491 "data_size": 7936 00:23:56.491 }, 00:23:56.491 { 00:23:56.491 "name": "BaseBdev2", 00:23:56.491 "uuid": "507bef4c-9cc7-559c-9c6a-1fb55948d6eb", 00:23:56.491 "is_configured": true, 00:23:56.491 "data_offset": 256, 00:23:56.491 "data_size": 7936 00:23:56.491 } 00:23:56.491 ] 00:23:56.491 }' 00:23:56.491 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:56.491 13:43:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:57.135 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:57.135 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:57.135 13:43:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:57.135 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:57.135 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:57.135 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.135 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:57.135 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.135 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:57.135 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.135 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:57.135 "name": "raid_bdev1", 00:23:57.135 "uuid": "cb50949b-6fc2-482f-994a-e51b07b6e6c8", 00:23:57.135 "strip_size_kb": 0, 00:23:57.135 "state": "online", 00:23:57.135 "raid_level": "raid1", 00:23:57.135 "superblock": true, 00:23:57.135 "num_base_bdevs": 2, 00:23:57.135 "num_base_bdevs_discovered": 2, 00:23:57.135 "num_base_bdevs_operational": 2, 00:23:57.135 "base_bdevs_list": [ 00:23:57.135 { 00:23:57.135 "name": "spare", 00:23:57.135 "uuid": "0b49559d-2374-578f-a842-2bb1649cd41f", 00:23:57.135 "is_configured": true, 00:23:57.135 "data_offset": 256, 00:23:57.135 "data_size": 7936 00:23:57.135 }, 00:23:57.135 { 00:23:57.135 "name": "BaseBdev2", 00:23:57.135 "uuid": "507bef4c-9cc7-559c-9c6a-1fb55948d6eb", 00:23:57.135 "is_configured": true, 00:23:57.135 "data_offset": 256, 00:23:57.135 "data_size": 7936 00:23:57.135 } 00:23:57.135 ] 00:23:57.135 }' 00:23:57.135 13:43:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:57.135 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:57.135 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:57.135 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:57.135 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:57.135 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:57.135 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.135 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:57.135 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.135 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:23:57.135 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:57.135 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.135 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:57.135 [2024-11-20 13:43:56.494457] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:57.135 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.135 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:57.135 13:43:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:57.135 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:57.135 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:57.135 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:57.135 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:57.135 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:57.135 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:57.135 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:57.135 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:57.135 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:57.135 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.135 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:57.135 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.135 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.135 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:57.135 "name": "raid_bdev1", 00:23:57.135 "uuid": "cb50949b-6fc2-482f-994a-e51b07b6e6c8", 00:23:57.135 "strip_size_kb": 0, 00:23:57.135 "state": "online", 00:23:57.135 
"raid_level": "raid1", 00:23:57.135 "superblock": true, 00:23:57.135 "num_base_bdevs": 2, 00:23:57.135 "num_base_bdevs_discovered": 1, 00:23:57.135 "num_base_bdevs_operational": 1, 00:23:57.135 "base_bdevs_list": [ 00:23:57.135 { 00:23:57.135 "name": null, 00:23:57.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:57.135 "is_configured": false, 00:23:57.135 "data_offset": 0, 00:23:57.135 "data_size": 7936 00:23:57.135 }, 00:23:57.135 { 00:23:57.135 "name": "BaseBdev2", 00:23:57.135 "uuid": "507bef4c-9cc7-559c-9c6a-1fb55948d6eb", 00:23:57.135 "is_configured": true, 00:23:57.135 "data_offset": 256, 00:23:57.135 "data_size": 7936 00:23:57.135 } 00:23:57.135 ] 00:23:57.135 }' 00:23:57.135 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:57.135 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:57.703 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:57.703 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.703 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:57.703 [2024-11-20 13:43:56.930440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:57.703 [2024-11-20 13:43:56.930637] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:57.703 [2024-11-20 13:43:56.930657] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:57.703 [2024-11-20 13:43:56.930705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:57.703 [2024-11-20 13:43:56.947305] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:23:57.703 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.703 13:43:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:23:57.703 [2024-11-20 13:43:56.949504] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:58.640 13:43:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:58.640 13:43:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:58.640 13:43:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:58.640 13:43:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:58.640 13:43:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:58.640 13:43:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:58.640 13:43:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.640 13:43:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:58.640 13:43:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:58.640 13:43:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.640 13:43:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:23:58.640 "name": "raid_bdev1", 00:23:58.640 "uuid": "cb50949b-6fc2-482f-994a-e51b07b6e6c8", 00:23:58.640 "strip_size_kb": 0, 00:23:58.640 "state": "online", 00:23:58.640 "raid_level": "raid1", 00:23:58.640 "superblock": true, 00:23:58.640 "num_base_bdevs": 2, 00:23:58.640 "num_base_bdevs_discovered": 2, 00:23:58.640 "num_base_bdevs_operational": 2, 00:23:58.640 "process": { 00:23:58.640 "type": "rebuild", 00:23:58.640 "target": "spare", 00:23:58.640 "progress": { 00:23:58.640 "blocks": 2560, 00:23:58.640 "percent": 32 00:23:58.640 } 00:23:58.640 }, 00:23:58.640 "base_bdevs_list": [ 00:23:58.640 { 00:23:58.641 "name": "spare", 00:23:58.641 "uuid": "0b49559d-2374-578f-a842-2bb1649cd41f", 00:23:58.641 "is_configured": true, 00:23:58.641 "data_offset": 256, 00:23:58.641 "data_size": 7936 00:23:58.641 }, 00:23:58.641 { 00:23:58.641 "name": "BaseBdev2", 00:23:58.641 "uuid": "507bef4c-9cc7-559c-9c6a-1fb55948d6eb", 00:23:58.641 "is_configured": true, 00:23:58.641 "data_offset": 256, 00:23:58.641 "data_size": 7936 00:23:58.641 } 00:23:58.641 ] 00:23:58.641 }' 00:23:58.641 13:43:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:58.641 13:43:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:58.641 13:43:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:58.641 13:43:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:58.641 13:43:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:23:58.641 13:43:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.641 13:43:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:58.641 [2024-11-20 13:43:58.081571] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:58.900 [2024-11-20 13:43:58.155114] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:58.900 [2024-11-20 13:43:58.155211] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:58.900 [2024-11-20 13:43:58.155228] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:58.900 [2024-11-20 13:43:58.155239] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:58.900 13:43:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.900 13:43:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:58.900 13:43:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:58.900 13:43:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:58.900 13:43:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:58.900 13:43:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:58.900 13:43:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:58.900 13:43:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:58.900 13:43:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:58.900 13:43:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:58.900 13:43:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:58.900 13:43:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:58.900 13:43:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.900 13:43:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:58.900 13:43:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:58.900 13:43:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.900 13:43:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:58.900 "name": "raid_bdev1", 00:23:58.900 "uuid": "cb50949b-6fc2-482f-994a-e51b07b6e6c8", 00:23:58.900 "strip_size_kb": 0, 00:23:58.900 "state": "online", 00:23:58.900 "raid_level": "raid1", 00:23:58.900 "superblock": true, 00:23:58.900 "num_base_bdevs": 2, 00:23:58.900 "num_base_bdevs_discovered": 1, 00:23:58.900 "num_base_bdevs_operational": 1, 00:23:58.900 "base_bdevs_list": [ 00:23:58.900 { 00:23:58.900 "name": null, 00:23:58.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:58.900 "is_configured": false, 00:23:58.900 "data_offset": 0, 00:23:58.900 "data_size": 7936 00:23:58.900 }, 00:23:58.900 { 00:23:58.900 "name": "BaseBdev2", 00:23:58.900 "uuid": "507bef4c-9cc7-559c-9c6a-1fb55948d6eb", 00:23:58.900 "is_configured": true, 00:23:58.900 "data_offset": 256, 00:23:58.900 "data_size": 7936 00:23:58.900 } 00:23:58.900 ] 00:23:58.900 }' 00:23:58.900 13:43:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:58.900 13:43:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:59.159 13:43:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:59.159 13:43:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.159 13:43:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:59.159 [2024-11-20 13:43:58.615534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:59.159 [2024-11-20 13:43:58.615614] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:59.159 [2024-11-20 13:43:58.615643] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:23:59.159 [2024-11-20 13:43:58.615658] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:59.159 [2024-11-20 13:43:58.615850] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:59.159 [2024-11-20 13:43:58.615868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:59.159 [2024-11-20 13:43:58.615926] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:59.159 [2024-11-20 13:43:58.615942] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:59.159 [2024-11-20 13:43:58.615955] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:59.159 [2024-11-20 13:43:58.615978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:59.159 [2024-11-20 13:43:58.631513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:23:59.159 spare 00:23:59.159 13:43:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.159 13:43:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:23:59.159 [2024-11-20 13:43:58.633906] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:00.535 13:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:00.535 13:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:00.535 13:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:00.535 13:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:00.535 13:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:00.535 13:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:00.535 13:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:00.535 13:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.535 13:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:00.535 13:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.535 13:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:24:00.535 "name": "raid_bdev1", 00:24:00.535 "uuid": "cb50949b-6fc2-482f-994a-e51b07b6e6c8", 00:24:00.535 "strip_size_kb": 0, 00:24:00.535 "state": "online", 00:24:00.535 "raid_level": "raid1", 00:24:00.535 "superblock": true, 00:24:00.535 "num_base_bdevs": 2, 00:24:00.535 "num_base_bdevs_discovered": 2, 00:24:00.535 "num_base_bdevs_operational": 2, 00:24:00.535 "process": { 00:24:00.535 "type": "rebuild", 00:24:00.535 "target": "spare", 00:24:00.535 "progress": { 00:24:00.535 "blocks": 2560, 00:24:00.535 "percent": 32 00:24:00.535 } 00:24:00.535 }, 00:24:00.535 "base_bdevs_list": [ 00:24:00.535 { 00:24:00.535 "name": "spare", 00:24:00.535 "uuid": "0b49559d-2374-578f-a842-2bb1649cd41f", 00:24:00.535 "is_configured": true, 00:24:00.535 "data_offset": 256, 00:24:00.535 "data_size": 7936 00:24:00.535 }, 00:24:00.535 { 00:24:00.535 "name": "BaseBdev2", 00:24:00.535 "uuid": "507bef4c-9cc7-559c-9c6a-1fb55948d6eb", 00:24:00.535 "is_configured": true, 00:24:00.536 "data_offset": 256, 00:24:00.536 "data_size": 7936 00:24:00.536 } 00:24:00.536 ] 00:24:00.536 }' 00:24:00.536 13:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:00.536 13:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:00.536 13:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:00.536 13:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:00.536 13:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:24:00.536 13:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.536 13:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:00.536 [2024-11-20 
13:43:59.766531] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:00.536 [2024-11-20 13:43:59.839529] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:00.536 [2024-11-20 13:43:59.839616] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:00.536 [2024-11-20 13:43:59.839636] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:00.536 [2024-11-20 13:43:59.839645] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:00.536 13:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.536 13:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:00.536 13:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:00.536 13:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:00.536 13:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:00.536 13:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:00.536 13:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:00.536 13:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:00.536 13:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:00.536 13:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:00.536 13:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:00.536 13:43:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:00.536 13:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.536 13:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:00.536 13:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:00.536 13:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.536 13:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:00.536 "name": "raid_bdev1", 00:24:00.536 "uuid": "cb50949b-6fc2-482f-994a-e51b07b6e6c8", 00:24:00.536 "strip_size_kb": 0, 00:24:00.536 "state": "online", 00:24:00.536 "raid_level": "raid1", 00:24:00.536 "superblock": true, 00:24:00.536 "num_base_bdevs": 2, 00:24:00.536 "num_base_bdevs_discovered": 1, 00:24:00.536 "num_base_bdevs_operational": 1, 00:24:00.536 "base_bdevs_list": [ 00:24:00.536 { 00:24:00.536 "name": null, 00:24:00.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:00.536 "is_configured": false, 00:24:00.536 "data_offset": 0, 00:24:00.536 "data_size": 7936 00:24:00.536 }, 00:24:00.536 { 00:24:00.536 "name": "BaseBdev2", 00:24:00.536 "uuid": "507bef4c-9cc7-559c-9c6a-1fb55948d6eb", 00:24:00.536 "is_configured": true, 00:24:00.536 "data_offset": 256, 00:24:00.536 "data_size": 7936 00:24:00.536 } 00:24:00.536 ] 00:24:00.536 }' 00:24:00.536 13:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:00.536 13:43:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:01.181 13:44:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:01.181 13:44:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:01.181 13:44:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:01.181 13:44:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:01.181 13:44:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:01.181 13:44:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:01.181 13:44:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:01.181 13:44:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.181 13:44:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:01.181 13:44:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.181 13:44:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:01.181 "name": "raid_bdev1", 00:24:01.181 "uuid": "cb50949b-6fc2-482f-994a-e51b07b6e6c8", 00:24:01.181 "strip_size_kb": 0, 00:24:01.181 "state": "online", 00:24:01.181 "raid_level": "raid1", 00:24:01.181 "superblock": true, 00:24:01.181 "num_base_bdevs": 2, 00:24:01.181 "num_base_bdevs_discovered": 1, 00:24:01.181 "num_base_bdevs_operational": 1, 00:24:01.181 "base_bdevs_list": [ 00:24:01.181 { 00:24:01.181 "name": null, 00:24:01.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:01.181 "is_configured": false, 00:24:01.181 "data_offset": 0, 00:24:01.181 "data_size": 7936 00:24:01.181 }, 00:24:01.181 { 00:24:01.181 "name": "BaseBdev2", 00:24:01.181 "uuid": "507bef4c-9cc7-559c-9c6a-1fb55948d6eb", 00:24:01.181 "is_configured": true, 00:24:01.181 "data_offset": 256, 
00:24:01.181 "data_size": 7936 00:24:01.181 } 00:24:01.181 ] 00:24:01.181 }' 00:24:01.181 13:44:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:01.181 13:44:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:01.181 13:44:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:01.181 13:44:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:01.181 13:44:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:24:01.181 13:44:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.181 13:44:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:01.181 13:44:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.181 13:44:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:01.181 13:44:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.181 13:44:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:01.181 [2024-11-20 13:44:00.426569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:01.181 [2024-11-20 13:44:00.426759] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:01.181 [2024-11-20 13:44:00.426792] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:24:01.181 [2024-11-20 13:44:00.426804] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:01.181 [2024-11-20 13:44:00.426984] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:01.181 [2024-11-20 13:44:00.427000] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:01.181 [2024-11-20 13:44:00.427078] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:01.181 [2024-11-20 13:44:00.427093] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:01.181 [2024-11-20 13:44:00.427106] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:01.181 [2024-11-20 13:44:00.427118] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:24:01.181 BaseBdev1 00:24:01.181 13:44:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.181 13:44:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:24:02.118 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:02.118 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:02.118 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:02.118 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:02.118 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:02.118 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:02.118 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:02.118 13:44:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:02.118 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:02.118 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:02.118 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:02.118 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.118 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:02.118 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:02.118 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.119 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:02.119 "name": "raid_bdev1", 00:24:02.119 "uuid": "cb50949b-6fc2-482f-994a-e51b07b6e6c8", 00:24:02.119 "strip_size_kb": 0, 00:24:02.119 "state": "online", 00:24:02.119 "raid_level": "raid1", 00:24:02.119 "superblock": true, 00:24:02.119 "num_base_bdevs": 2, 00:24:02.119 "num_base_bdevs_discovered": 1, 00:24:02.119 "num_base_bdevs_operational": 1, 00:24:02.119 "base_bdevs_list": [ 00:24:02.119 { 00:24:02.119 "name": null, 00:24:02.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:02.119 "is_configured": false, 00:24:02.119 "data_offset": 0, 00:24:02.119 "data_size": 7936 00:24:02.119 }, 00:24:02.119 { 00:24:02.119 "name": "BaseBdev2", 00:24:02.119 "uuid": "507bef4c-9cc7-559c-9c6a-1fb55948d6eb", 00:24:02.119 "is_configured": true, 00:24:02.119 "data_offset": 256, 00:24:02.119 "data_size": 7936 00:24:02.119 } 00:24:02.119 ] 00:24:02.119 }' 00:24:02.119 13:44:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:02.119 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:02.379 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:02.379 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:02.379 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:02.379 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:02.379 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:02.379 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:02.379 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.379 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:02.379 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:02.379 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.379 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:02.379 "name": "raid_bdev1", 00:24:02.379 "uuid": "cb50949b-6fc2-482f-994a-e51b07b6e6c8", 00:24:02.379 "strip_size_kb": 0, 00:24:02.379 "state": "online", 00:24:02.379 "raid_level": "raid1", 00:24:02.379 "superblock": true, 00:24:02.379 "num_base_bdevs": 2, 00:24:02.379 "num_base_bdevs_discovered": 1, 00:24:02.379 "num_base_bdevs_operational": 1, 00:24:02.379 "base_bdevs_list": [ 00:24:02.379 { 00:24:02.379 "name": 
null, 00:24:02.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:02.379 "is_configured": false, 00:24:02.379 "data_offset": 0, 00:24:02.379 "data_size": 7936 00:24:02.379 }, 00:24:02.379 { 00:24:02.379 "name": "BaseBdev2", 00:24:02.379 "uuid": "507bef4c-9cc7-559c-9c6a-1fb55948d6eb", 00:24:02.379 "is_configured": true, 00:24:02.379 "data_offset": 256, 00:24:02.379 "data_size": 7936 00:24:02.379 } 00:24:02.379 ] 00:24:02.379 }' 00:24:02.379 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:02.638 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:02.638 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:02.638 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:02.638 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:02.638 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:24:02.638 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:02.638 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:02.638 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:02.638 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:02.638 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:02.638 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:02.638 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.638 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:02.638 [2024-11-20 13:44:01.942430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:02.638 [2024-11-20 13:44:01.942721] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:02.638 [2024-11-20 13:44:01.942754] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:02.638 request: 00:24:02.638 { 00:24:02.638 "base_bdev": "BaseBdev1", 00:24:02.638 "raid_bdev": "raid_bdev1", 00:24:02.638 "method": "bdev_raid_add_base_bdev", 00:24:02.638 "req_id": 1 00:24:02.638 } 00:24:02.638 Got JSON-RPC error response 00:24:02.638 response: 00:24:02.638 { 00:24:02.638 "code": -22, 00:24:02.638 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:24:02.638 } 00:24:02.638 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:02.638 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:24:02.638 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:02.638 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:02.638 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:02.638 13:44:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:24:03.574 13:44:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:24:03.574 13:44:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:03.574 13:44:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:03.574 13:44:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:03.574 13:44:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:03.574 13:44:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:03.574 13:44:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:03.574 13:44:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:03.574 13:44:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:03.574 13:44:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:03.574 13:44:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:03.574 13:44:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.574 13:44:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:03.574 13:44:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.574 13:44:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.574 13:44:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:03.574 "name": "raid_bdev1", 00:24:03.574 "uuid": "cb50949b-6fc2-482f-994a-e51b07b6e6c8", 00:24:03.574 "strip_size_kb": 0, 
00:24:03.574 "state": "online", 00:24:03.574 "raid_level": "raid1", 00:24:03.574 "superblock": true, 00:24:03.574 "num_base_bdevs": 2, 00:24:03.574 "num_base_bdevs_discovered": 1, 00:24:03.574 "num_base_bdevs_operational": 1, 00:24:03.574 "base_bdevs_list": [ 00:24:03.574 { 00:24:03.574 "name": null, 00:24:03.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:03.574 "is_configured": false, 00:24:03.574 "data_offset": 0, 00:24:03.574 "data_size": 7936 00:24:03.574 }, 00:24:03.574 { 00:24:03.574 "name": "BaseBdev2", 00:24:03.574 "uuid": "507bef4c-9cc7-559c-9c6a-1fb55948d6eb", 00:24:03.574 "is_configured": true, 00:24:03.574 "data_offset": 256, 00:24:03.574 "data_size": 7936 00:24:03.574 } 00:24:03.574 ] 00:24:03.574 }' 00:24:03.574 13:44:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:03.574 13:44:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:03.833 13:44:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:03.833 13:44:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:03.833 13:44:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:03.833 13:44:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:03.833 13:44:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:03.833 13:44:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:03.833 13:44:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.833 13:44:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.833 
13:44:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:04.092 13:44:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.092 13:44:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:04.092 "name": "raid_bdev1", 00:24:04.092 "uuid": "cb50949b-6fc2-482f-994a-e51b07b6e6c8", 00:24:04.092 "strip_size_kb": 0, 00:24:04.092 "state": "online", 00:24:04.092 "raid_level": "raid1", 00:24:04.092 "superblock": true, 00:24:04.092 "num_base_bdevs": 2, 00:24:04.092 "num_base_bdevs_discovered": 1, 00:24:04.092 "num_base_bdevs_operational": 1, 00:24:04.092 "base_bdevs_list": [ 00:24:04.092 { 00:24:04.092 "name": null, 00:24:04.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:04.092 "is_configured": false, 00:24:04.092 "data_offset": 0, 00:24:04.092 "data_size": 7936 00:24:04.092 }, 00:24:04.092 { 00:24:04.092 "name": "BaseBdev2", 00:24:04.092 "uuid": "507bef4c-9cc7-559c-9c6a-1fb55948d6eb", 00:24:04.092 "is_configured": true, 00:24:04.092 "data_offset": 256, 00:24:04.092 "data_size": 7936 00:24:04.092 } 00:24:04.092 ] 00:24:04.092 }' 00:24:04.092 13:44:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:04.092 13:44:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:04.092 13:44:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:04.092 13:44:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:04.092 13:44:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 88835 00:24:04.092 13:44:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88835 ']' 00:24:04.092 13:44:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88835 00:24:04.092 13:44:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:24:04.092 13:44:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:04.092 13:44:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88835 00:24:04.092 killing process with pid 88835 00:24:04.092 Received shutdown signal, test time was about 60.000000 seconds 00:24:04.092 00:24:04.092 Latency(us) 00:24:04.092 [2024-11-20T13:44:03.577Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:04.092 [2024-11-20T13:44:03.577Z] =================================================================================================================== 00:24:04.092 [2024-11-20T13:44:03.577Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:04.092 13:44:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:04.092 13:44:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:04.092 13:44:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88835' 00:24:04.092 13:44:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88835 00:24:04.092 13:44:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88835 00:24:04.092 [2024-11-20 13:44:03.469288] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:04.092 [2024-11-20 13:44:03.469416] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:04.092 [2024-11-20 13:44:03.469464] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:24:04.092 [2024-11-20 13:44:03.469478] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:24:04.352 [2024-11-20 13:44:03.777293] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:05.730 13:44:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:24:05.730 00:24:05.730 real 0m17.252s 00:24:05.730 user 0m22.428s 00:24:05.730 sys 0m1.743s 00:24:05.730 13:44:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:05.730 ************************************ 00:24:05.730 END TEST raid_rebuild_test_sb_md_interleaved 00:24:05.730 ************************************ 00:24:05.730 13:44:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:05.730 13:44:04 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:24:05.730 13:44:04 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:24:05.730 13:44:04 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 88835 ']' 00:24:05.730 13:44:04 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 88835 00:24:05.730 13:44:04 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:24:05.730 ************************************ 00:24:05.730 END TEST bdev_raid 00:24:05.730 ************************************ 00:24:05.730 00:24:05.730 real 12m2.510s 00:24:05.730 user 16m7.135s 00:24:05.730 sys 2m6.786s 00:24:05.730 13:44:04 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:05.730 13:44:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:05.730 13:44:05 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:24:05.730 13:44:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:05.730 13:44:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:05.730 13:44:05 -- common/autotest_common.sh@10 -- # set +x 00:24:05.731 
************************************ 00:24:05.731 START TEST spdkcli_raid 00:24:05.731 ************************************ 00:24:05.731 13:44:05 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:24:05.731 * Looking for test storage... 00:24:05.731 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:24:05.731 13:44:05 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:05.731 13:44:05 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:24:05.731 13:44:05 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:05.731 13:44:05 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:05.731 13:44:05 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:05.731 13:44:05 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:05.731 13:44:05 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:05.731 13:44:05 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:24:05.731 13:44:05 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:24:05.731 13:44:05 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:24:05.731 13:44:05 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:24:05.731 13:44:05 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:24:05.731 13:44:05 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:24:05.731 13:44:05 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:24:05.731 13:44:05 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:05.731 13:44:05 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:24:05.731 13:44:05 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:24:05.731 13:44:05 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:05.731 13:44:05 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:05.731 13:44:05 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:24:05.731 13:44:05 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:24:05.731 13:44:05 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:05.731 13:44:05 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:24:05.731 13:44:05 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:24:05.731 13:44:05 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:24:05.731 13:44:05 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:24:05.731 13:44:05 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:05.731 13:44:05 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:24:05.731 13:44:05 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:24:05.731 13:44:05 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:05.731 13:44:05 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:05.731 13:44:05 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:24:05.731 13:44:05 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:05.731 13:44:05 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:05.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.731 --rc genhtml_branch_coverage=1 00:24:05.731 --rc genhtml_function_coverage=1 00:24:05.731 --rc genhtml_legend=1 00:24:05.731 --rc geninfo_all_blocks=1 00:24:05.731 --rc geninfo_unexecuted_blocks=1 00:24:05.731 00:24:05.731 ' 00:24:05.731 13:44:05 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:05.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.731 --rc genhtml_branch_coverage=1 00:24:05.731 --rc genhtml_function_coverage=1 00:24:05.731 --rc genhtml_legend=1 00:24:05.731 --rc geninfo_all_blocks=1 00:24:05.731 --rc geninfo_unexecuted_blocks=1 00:24:05.731 00:24:05.731 ' 00:24:05.731 
13:44:05 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:05.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.731 --rc genhtml_branch_coverage=1 00:24:05.731 --rc genhtml_function_coverage=1 00:24:05.731 --rc genhtml_legend=1 00:24:05.731 --rc geninfo_all_blocks=1 00:24:05.731 --rc geninfo_unexecuted_blocks=1 00:24:05.731 00:24:05.731 ' 00:24:05.731 13:44:05 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:05.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.731 --rc genhtml_branch_coverage=1 00:24:05.731 --rc genhtml_function_coverage=1 00:24:05.731 --rc genhtml_legend=1 00:24:05.731 --rc geninfo_all_blocks=1 00:24:05.731 --rc geninfo_unexecuted_blocks=1 00:24:05.731 00:24:05.731 ' 00:24:05.731 13:44:05 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:24:05.731 13:44:05 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:24:05.731 13:44:05 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:24:05.731 13:44:05 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:24:05.731 13:44:05 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:24:05.731 13:44:05 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:24:05.731 13:44:05 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:24:05.731 13:44:05 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:24:05.731 13:44:05 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:24:05.731 13:44:05 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:24:05.731 13:44:05 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:24:05.731 13:44:05 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:24:05.731 13:44:05 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:24:05.731 13:44:05 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:24:05.731 13:44:05 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:24:05.731 13:44:05 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:24:05.731 13:44:05 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:24:05.731 13:44:05 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:24:05.731 13:44:05 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:24:05.731 13:44:05 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:24:05.731 13:44:05 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:24:05.731 13:44:05 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:24:05.731 13:44:05 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:24:05.731 13:44:05 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:24:05.731 13:44:05 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:24:05.731 13:44:05 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:24:05.731 13:44:05 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:24:05.990 13:44:05 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:24:05.990 13:44:05 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:24:05.990 13:44:05 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:24:05.990 13:44:05 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:24:05.990 13:44:05 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:24:05.990 13:44:05 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:24:05.990 13:44:05 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:05.990 13:44:05 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:05.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:05.990 13:44:05 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:24:05.990 13:44:05 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89508 00:24:05.990 13:44:05 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89508 00:24:05.990 13:44:05 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 89508 ']' 00:24:05.990 13:44:05 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.990 13:44:05 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:05.990 13:44:05 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.990 13:44:05 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:05.990 13:44:05 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:05.990 13:44:05 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:24:05.990 [2024-11-20 13:44:05.330757] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:24:05.990 [2024-11-20 13:44:05.330872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89508 ] 00:24:06.304 [2024-11-20 13:44:05.513470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:06.304 [2024-11-20 13:44:05.636550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.304 [2024-11-20 13:44:05.636583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:07.239 13:44:06 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:07.239 13:44:06 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:24:07.239 13:44:06 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:24:07.239 13:44:06 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:07.239 13:44:06 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:07.239 13:44:06 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:24:07.239 13:44:06 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:07.239 13:44:06 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:07.239 13:44:06 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:24:07.239 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:24:07.239 ' 00:24:09.140 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:24:09.140 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:24:09.140 13:44:08 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:24:09.140 13:44:08 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:09.140 13:44:08 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:24:09.140 13:44:08 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:24:09.140 13:44:08 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:09.140 13:44:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:09.140 13:44:08 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:24:09.140 ' 00:24:10.076 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:24:10.076 13:44:09 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:24:10.076 13:44:09 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:10.076 13:44:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:10.076 13:44:09 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:24:10.076 13:44:09 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:10.076 13:44:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:10.076 13:44:09 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:24:10.076 13:44:09 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:24:10.644 13:44:10 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:24:10.644 13:44:10 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:24:10.644 13:44:10 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:24:10.644 13:44:10 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:10.644 13:44:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:10.644 13:44:10 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:24:10.644 13:44:10 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:10.644 13:44:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:10.903 13:44:10 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:24:10.903 ' 00:24:11.838 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:24:11.838 13:44:11 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:24:11.839 13:44:11 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:11.839 13:44:11 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:11.839 13:44:11 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:24:11.839 13:44:11 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:11.839 13:44:11 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:11.839 13:44:11 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:24:11.839 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:24:11.839 ' 00:24:13.213 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:24:13.213 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:24:13.471 13:44:12 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:24:13.471 13:44:12 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:13.471 13:44:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:13.471 13:44:12 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89508 00:24:13.471 13:44:12 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89508 ']' 00:24:13.471 13:44:12 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89508 00:24:13.471 13:44:12 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:24:13.471 13:44:12 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:13.471 13:44:12 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89508 00:24:13.471 killing process with pid 89508 00:24:13.471 13:44:12 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:13.471 13:44:12 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:13.471 13:44:12 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89508' 00:24:13.471 13:44:12 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 89508 00:24:13.471 13:44:12 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 89508 00:24:16.002 13:44:15 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:24:16.002 13:44:15 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89508 ']' 00:24:16.002 13:44:15 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89508 00:24:16.002 13:44:15 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89508 ']' 00:24:16.002 13:44:15 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89508 00:24:16.002 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89508) - No such process 00:24:16.002 Process with pid 89508 is not found 00:24:16.002 13:44:15 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 89508 is not found' 00:24:16.002 13:44:15 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:24:16.002 13:44:15 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:24:16.002 13:44:15 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:24:16.002 13:44:15 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:24:16.002 ************************************ 00:24:16.002 END TEST spdkcli_raid 
00:24:16.002 ************************************ 00:24:16.002 00:24:16.002 real 0m10.270s 00:24:16.002 user 0m21.156s 00:24:16.002 sys 0m1.204s 00:24:16.002 13:44:15 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:16.002 13:44:15 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:16.002 13:44:15 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:24:16.002 13:44:15 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:16.002 13:44:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:16.002 13:44:15 -- common/autotest_common.sh@10 -- # set +x 00:24:16.002 ************************************ 00:24:16.002 START TEST blockdev_raid5f 00:24:16.002 ************************************ 00:24:16.002 13:44:15 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:24:16.002 * Looking for test storage... 00:24:16.002 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:24:16.002 13:44:15 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:16.002 13:44:15 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:16.002 13:44:15 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:24:16.261 13:44:15 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:16.261 13:44:15 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:16.261 13:44:15 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:16.261 13:44:15 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:16.261 13:44:15 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:24:16.261 13:44:15 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:24:16.261 13:44:15 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:24:16.261 13:44:15 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:24:16.261 13:44:15 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:24:16.261 13:44:15 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:24:16.261 13:44:15 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:24:16.261 13:44:15 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:16.261 13:44:15 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:24:16.261 13:44:15 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:24:16.261 13:44:15 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:16.261 13:44:15 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:16.261 13:44:15 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:24:16.261 13:44:15 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:24:16.261 13:44:15 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:16.261 13:44:15 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:24:16.261 13:44:15 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:24:16.261 13:44:15 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:24:16.261 13:44:15 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:24:16.261 13:44:15 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:16.261 13:44:15 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:24:16.261 13:44:15 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:24:16.261 13:44:15 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:16.261 13:44:15 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:16.261 13:44:15 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:24:16.261 13:44:15 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:16.261 13:44:15 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:16.261 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.261 --rc genhtml_branch_coverage=1 00:24:16.261 --rc genhtml_function_coverage=1 00:24:16.261 --rc genhtml_legend=1 00:24:16.261 --rc geninfo_all_blocks=1 00:24:16.261 --rc geninfo_unexecuted_blocks=1 00:24:16.261 00:24:16.261 ' 00:24:16.261 13:44:15 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:16.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.261 --rc genhtml_branch_coverage=1 00:24:16.261 --rc genhtml_function_coverage=1 00:24:16.261 --rc genhtml_legend=1 00:24:16.261 --rc geninfo_all_blocks=1 00:24:16.261 --rc geninfo_unexecuted_blocks=1 00:24:16.261 00:24:16.261 ' 00:24:16.261 13:44:15 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:16.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.261 --rc genhtml_branch_coverage=1 00:24:16.261 --rc genhtml_function_coverage=1 00:24:16.261 --rc genhtml_legend=1 00:24:16.261 --rc geninfo_all_blocks=1 00:24:16.261 --rc geninfo_unexecuted_blocks=1 00:24:16.261 00:24:16.261 ' 00:24:16.261 13:44:15 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:16.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.261 --rc genhtml_branch_coverage=1 00:24:16.261 --rc genhtml_function_coverage=1 00:24:16.261 --rc genhtml_legend=1 00:24:16.261 --rc geninfo_all_blocks=1 00:24:16.261 --rc geninfo_unexecuted_blocks=1 00:24:16.261 00:24:16.261 ' 00:24:16.261 13:44:15 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:24:16.261 13:44:15 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:24:16.262 13:44:15 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:24:16.262 13:44:15 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:24:16.262 13:44:15 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:24:16.262 13:44:15 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:24:16.262 13:44:15 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:24:16.262 13:44:15 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:24:16.262 13:44:15 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:24:16.262 13:44:15 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:24:16.262 13:44:15 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:24:16.262 13:44:15 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:24:16.262 13:44:15 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:24:16.262 13:44:15 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:24:16.262 13:44:15 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:24:16.262 13:44:15 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:24:16.262 13:44:15 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:24:16.262 13:44:15 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:24:16.262 13:44:15 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:24:16.262 13:44:15 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:24:16.262 13:44:15 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:24:16.262 13:44:15 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:24:16.262 13:44:15 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:24:16.262 13:44:15 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:24:16.262 13:44:15 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=89788 00:24:16.262 13:44:15 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:24:16.262 13:44:15 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:24:16.262 13:44:15 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 89788 00:24:16.262 13:44:15 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 89788 ']' 00:24:16.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:16.262 13:44:15 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.262 13:44:15 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:16.262 13:44:15 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.262 13:44:15 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:16.262 13:44:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:16.262 [2024-11-20 13:44:15.702739] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:24:16.262 [2024-11-20 13:44:15.703081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89788 ] 00:24:16.519 [2024-11-20 13:44:15.881198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.519 [2024-11-20 13:44:15.998518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.453 13:44:16 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:17.453 13:44:16 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:24:17.453 13:44:16 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:24:17.453 13:44:16 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:24:17.453 13:44:16 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:24:17.453 13:44:16 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.453 13:44:16 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:17.453 Malloc0 00:24:17.710 Malloc1 00:24:17.710 Malloc2 00:24:17.710 13:44:17 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.710 13:44:17 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:24:17.710 13:44:17 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.710 13:44:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:17.710 13:44:17 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.710 13:44:17 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:24:17.710 13:44:17 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:24:17.710 13:44:17 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.710 13:44:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:17.710 13:44:17 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.711 13:44:17 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:24:17.711 13:44:17 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.711 13:44:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:17.711 13:44:17 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.711 13:44:17 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:24:17.711 13:44:17 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.711 13:44:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:17.711 13:44:17 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.711 13:44:17 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:24:17.711 13:44:17 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 
00:24:17.711 13:44:17 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.711 13:44:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:17.711 13:44:17 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:24:17.711 13:44:17 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.711 13:44:17 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:24:17.711 13:44:17 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "9b2742f0-ddca-4cd6-abd2-e5ef74a7881e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "9b2742f0-ddca-4cd6-abd2-e5ef74a7881e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "9b2742f0-ddca-4cd6-abd2-e5ef74a7881e",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "88cb99ae-a5ff-4e5e-92f7-0cbf96bf3ec5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "749dbbee-c0a6-410a-912f-0efa6ff3d4a4",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "43168ee9-7f45-43bd-b47e-959cdf3d284a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:24:17.711 13:44:17 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:24:17.983 13:44:17 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:24:17.983 13:44:17 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:24:17.983 13:44:17 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:24:17.983 13:44:17 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 89788 00:24:17.983 13:44:17 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 89788 ']' 00:24:17.983 13:44:17 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 89788 00:24:17.983 13:44:17 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:24:17.983 13:44:17 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:17.983 13:44:17 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89788 00:24:17.983 killing process with pid 89788 00:24:17.983 13:44:17 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:17.983 13:44:17 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:17.983 13:44:17 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89788' 00:24:17.983 13:44:17 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 89788 00:24:17.983 13:44:17 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 89788 00:24:20.517 13:44:19 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:20.517 13:44:19 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:24:20.517 13:44:19 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:24:20.517 13:44:19 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:20.517 13:44:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:20.517 ************************************ 00:24:20.517 START TEST bdev_hello_world 00:24:20.517 ************************************ 00:24:20.517 13:44:19 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:24:20.777 [2024-11-20 13:44:20.090163] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:24:20.777 [2024-11-20 13:44:20.090297] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89855 ] 00:24:21.036 [2024-11-20 13:44:20.271119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.036 [2024-11-20 13:44:20.389277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.605 [2024-11-20 13:44:20.913778] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:24:21.605 [2024-11-20 13:44:20.914044] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:24:21.605 [2024-11-20 13:44:20.914086] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:24:21.605 [2024-11-20 13:44:20.914579] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:24:21.605 [2024-11-20 13:44:20.914714] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:24:21.605 [2024-11-20 13:44:20.914733] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:24:21.605 [2024-11-20 13:44:20.914788] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:24:21.605 00:24:21.605 [2024-11-20 13:44:20.914810] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:24:22.983 00:24:22.983 real 0m2.341s 00:24:22.983 user 0m1.958s 00:24:22.983 sys 0m0.261s 00:24:22.983 ************************************ 00:24:22.983 END TEST bdev_hello_world 00:24:22.983 13:44:22 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:22.983 13:44:22 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:24:22.983 ************************************ 00:24:22.983 13:44:22 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:24:22.983 13:44:22 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:22.983 13:44:22 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:22.983 13:44:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:22.983 ************************************ 00:24:22.983 START TEST bdev_bounds 00:24:22.983 ************************************ 00:24:22.983 13:44:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:24:22.983 Process bdevio pid: 89903 00:24:22.983 13:44:22 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=89903 00:24:22.983 13:44:22 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:24:22.983 13:44:22 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:24:22.983 13:44:22 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 89903' 00:24:22.983 13:44:22 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 89903 00:24:22.983 13:44:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 89903 ']' 00:24:22.983 13:44:22 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:22.983 13:44:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:22.983 13:44:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:22.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:22.983 13:44:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:22.984 13:44:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:24:23.242 [2024-11-20 13:44:22.504941] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:24:23.242 [2024-11-20 13:44:22.505096] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89903 ] 00:24:23.242 [2024-11-20 13:44:22.686185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:23.501 [2024-11-20 13:44:22.808265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:23.501 [2024-11-20 13:44:22.808440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:23.501 [2024-11-20 13:44:22.808471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:24.073 13:44:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:24.073 13:44:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:24:24.074 13:44:23 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:24:24.074 I/O targets: 00:24:24.074 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:24:24.074 00:24:24.074 
00:24:24.074 CUnit - A unit testing framework for C - Version 2.1-3 00:24:24.074 http://cunit.sourceforge.net/ 00:24:24.074 00:24:24.074 00:24:24.074 Suite: bdevio tests on: raid5f 00:24:24.074 Test: blockdev write read block ...passed 00:24:24.074 Test: blockdev write zeroes read block ...passed 00:24:24.074 Test: blockdev write zeroes read no split ...passed 00:24:24.340 Test: blockdev write zeroes read split ...passed 00:24:24.340 Test: blockdev write zeroes read split partial ...passed 00:24:24.340 Test: blockdev reset ...passed 00:24:24.340 Test: blockdev write read 8 blocks ...passed 00:24:24.340 Test: blockdev write read size > 128k ...passed 00:24:24.340 Test: blockdev write read invalid size ...passed 00:24:24.340 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:24.340 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:24.340 Test: blockdev write read max offset ...passed 00:24:24.340 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:24.340 Test: blockdev writev readv 8 blocks ...passed 00:24:24.340 Test: blockdev writev readv 30 x 1block ...passed 00:24:24.340 Test: blockdev writev readv block ...passed 00:24:24.340 Test: blockdev writev readv size > 128k ...passed 00:24:24.340 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:24.340 Test: blockdev comparev and writev ...passed 00:24:24.340 Test: blockdev nvme passthru rw ...passed 00:24:24.340 Test: blockdev nvme passthru vendor specific ...passed 00:24:24.340 Test: blockdev nvme admin passthru ...passed 00:24:24.340 Test: blockdev copy ...passed 00:24:24.340 00:24:24.340 Run Summary: Type Total Ran Passed Failed Inactive 00:24:24.340 suites 1 1 n/a 0 0 00:24:24.340 tests 23 23 23 0 0 00:24:24.340 asserts 130 130 130 0 n/a 00:24:24.340 00:24:24.340 Elapsed time = 0.579 seconds 00:24:24.340 0 00:24:24.340 13:44:23 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 89903 00:24:24.340 
13:44:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 89903 ']' 00:24:24.340 13:44:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 89903 00:24:24.340 13:44:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:24:24.340 13:44:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:24.340 13:44:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89903 00:24:24.340 13:44:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:24.340 13:44:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:24.340 13:44:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89903' 00:24:24.340 killing process with pid 89903 00:24:24.340 13:44:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 89903 00:24:24.340 13:44:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 89903 00:24:26.246 13:44:25 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:24:26.246 00:24:26.246 real 0m2.819s 00:24:26.246 user 0m7.040s 00:24:26.246 sys 0m0.406s 00:24:26.246 13:44:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:26.246 13:44:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:24:26.246 ************************************ 00:24:26.246 END TEST bdev_bounds 00:24:26.246 ************************************ 00:24:26.246 13:44:25 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:24:26.246 13:44:25 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:26.246 13:44:25 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:26.246 
13:44:25 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:26.246 ************************************ 00:24:26.246 START TEST bdev_nbd 00:24:26.246 ************************************ 00:24:26.246 13:44:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:24:26.246 13:44:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:24:26.246 13:44:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:24:26.246 13:44:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:26.246 13:44:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:24:26.246 13:44:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:24:26.246 13:44:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:24:26.246 13:44:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:24:26.246 13:44:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:24:26.246 13:44:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:24:26.246 13:44:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:24:26.246 13:44:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:24:26.246 13:44:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:24:26.246 13:44:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:24:26.246 13:44:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:24:26.246 13:44:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:24:26.246 13:44:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=89962 00:24:26.246 13:44:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:24:26.246 13:44:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:24:26.246 13:44:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 89962 /var/tmp/spdk-nbd.sock 00:24:26.247 13:44:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 89962 ']' 00:24:26.247 13:44:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:24:26.247 13:44:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:26.247 13:44:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:24:26.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:24:26.247 13:44:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:26.247 13:44:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:24:26.247 [2024-11-20 13:44:25.414452] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:24:26.247 [2024-11-20 13:44:25.414594] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:26.247 [2024-11-20 13:44:25.596631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.247 [2024-11-20 13:44:25.711299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.815 13:44:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:26.815 13:44:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:24:26.815 13:44:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:24:26.815 13:44:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:26.815 13:44:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:24:26.815 13:44:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:24:26.815 13:44:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:24:26.815 13:44:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:26.815 13:44:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:24:26.815 13:44:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:24:26.815 13:44:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:24:26.815 13:44:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:24:26.815 13:44:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:24:26.815 13:44:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:24:26.815 13:44:26 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:24:27.073 13:44:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:24:27.073 13:44:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:24:27.073 13:44:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:24:27.073 13:44:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:27.073 13:44:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:27.073 13:44:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:27.074 13:44:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:27.074 13:44:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:27.074 13:44:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:27.074 13:44:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:27.074 13:44:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:27.074 13:44:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:27.074 1+0 records in 00:24:27.074 1+0 records out 00:24:27.074 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404801 s, 10.1 MB/s 00:24:27.074 13:44:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:27.074 13:44:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:27.074 13:44:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:27.074 13:44:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:24:27.074 13:44:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:27.074 13:44:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:27.074 13:44:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:24:27.074 13:44:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:27.333 13:44:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:24:27.333 { 00:24:27.333 "nbd_device": "/dev/nbd0", 00:24:27.333 "bdev_name": "raid5f" 00:24:27.333 } 00:24:27.333 ]' 00:24:27.333 13:44:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:24:27.333 13:44:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:24:27.333 { 00:24:27.333 "nbd_device": "/dev/nbd0", 00:24:27.333 "bdev_name": "raid5f" 00:24:27.333 } 00:24:27.333 ]' 00:24:27.333 13:44:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:24:27.333 13:44:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:24:27.333 13:44:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:27.333 13:44:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:27.333 13:44:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:27.333 13:44:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:24:27.333 13:44:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:27.333 13:44:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:27.593 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:24:27.593 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:27.593 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:27.593 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:27.593 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:27.593 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:27.593 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:27.593 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:27.593 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:27.593 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:27.593 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:27.852 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:24:27.852 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:24:27.852 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:27.852 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:24:27.852 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:24:27.852 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:27.852 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:24:27.852 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:24:27.852 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:24:27.852 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:24:27.852 13:44:27 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:24:27.852 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:24:27.853 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:24:27.853 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:27.853 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:24:27.853 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:24:27.853 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:24:27.853 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:24:27.853 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:24:27.853 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:27.853 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:24:27.853 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:27.853 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:27.853 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:27.853 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:24:27.853 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:27.853 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:27.853 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:24:28.112 /dev/nbd0 00:24:28.112 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:28.112 13:44:27 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:28.112 13:44:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:28.112 13:44:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:28.112 13:44:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:28.112 13:44:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:28.112 13:44:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:28.112 13:44:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:28.112 13:44:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:28.112 13:44:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:28.112 13:44:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:28.112 1+0 records in 00:24:28.112 1+0 records out 00:24:28.112 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248616 s, 16.5 MB/s 00:24:28.112 13:44:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:28.112 13:44:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:28.112 13:44:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:28.112 13:44:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:28.112 13:44:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:28.112 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:28.112 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:28.112 13:44:27 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:28.112 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:28.112 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:28.372 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:24:28.372 { 00:24:28.372 "nbd_device": "/dev/nbd0", 00:24:28.372 "bdev_name": "raid5f" 00:24:28.372 } 00:24:28.372 ]' 00:24:28.372 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:24:28.372 { 00:24:28.372 "nbd_device": "/dev/nbd0", 00:24:28.372 "bdev_name": "raid5f" 00:24:28.372 } 00:24:28.372 ]' 00:24:28.372 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:28.631 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:24:28.631 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:24:28.631 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:28.631 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:24:28.631 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:24:28.631 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:24:28.631 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:24:28.631 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:24:28.631 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:24:28.631 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:24:28.631 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:24:28.631 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:24:28.631 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:24:28.631 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:24:28.631 256+0 records in 00:24:28.631 256+0 records out 00:24:28.631 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135163 s, 77.6 MB/s 00:24:28.631 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:28.631 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:24:28.631 256+0 records in 00:24:28.631 256+0 records out 00:24:28.631 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.034537 s, 30.4 MB/s 00:24:28.631 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:24:28.631 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:24:28.631 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:24:28.631 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:24:28.631 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:24:28.631 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:24:28.631 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:24:28.631 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:28.631 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:24:28.631 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:24:28.631 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:24:28.631 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:28.631 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:28.631 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:28.631 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:24:28.631 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:28.631 13:44:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:28.890 13:44:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:28.890 13:44:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:28.890 13:44:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:28.890 13:44:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:28.890 13:44:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:28.890 13:44:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:28.890 13:44:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:28.890 13:44:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:28.890 13:44:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:28.890 13:44:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:28.890 13:44:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:24:29.149 13:44:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:24:29.149 13:44:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:29.149 13:44:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:24:29.149 13:44:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:24:29.149 13:44:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:24:29.150 13:44:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:29.150 13:44:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:24:29.150 13:44:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:24:29.150 13:44:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:24:29.150 13:44:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:24:29.150 13:44:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:24:29.150 13:44:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:24:29.150 13:44:28 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:24:29.150 13:44:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:29.150 13:44:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:24:29.150 13:44:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:24:29.409 malloc_lvol_verify 00:24:29.409 13:44:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:24:29.668 9f8be4d4-44c9-4861-87ef-86392b980352 00:24:29.668 13:44:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:24:29.668 6abf4c7b-826a-4ca1-8bf9-6ef506f0022e 00:24:29.668 13:44:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:24:29.939 /dev/nbd0 00:24:29.939 13:44:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:24:29.939 13:44:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:24:29.939 13:44:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:24:29.939 13:44:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:24:29.939 13:44:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:24:29.939 mke2fs 1.47.0 (5-Feb-2023) 00:24:29.939 Discarding device blocks: 0/4096 done 00:24:29.939 Creating filesystem with 4096 1k blocks and 1024 inodes 00:24:29.939 00:24:29.939 Allocating group tables: 0/1 done 00:24:29.939 Writing inode tables: 0/1 done 00:24:29.939 Creating journal (1024 blocks): done 00:24:29.939 Writing superblocks and filesystem accounting information: 0/1 done 00:24:29.939 00:24:29.939 13:44:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:24:29.939 13:44:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:29.939 13:44:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:29.939 13:44:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:29.939 13:44:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:24:29.939 13:44:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:29.939 13:44:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:30.199 13:44:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:30.199 13:44:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:30.199 13:44:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:30.199 13:44:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:30.199 13:44:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:30.199 13:44:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:30.199 13:44:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:30.199 13:44:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:30.199 13:44:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 89962 00:24:30.199 13:44:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 89962 ']' 00:24:30.199 13:44:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 89962 00:24:30.199 13:44:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:24:30.458 13:44:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:30.458 13:44:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89962 00:24:30.458 13:44:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:30.458 13:44:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:30.458 killing process with pid 89962 00:24:30.458 13:44:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89962' 00:24:30.458 13:44:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 89962 00:24:30.458 13:44:29 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 89962 00:24:31.834 13:44:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:24:31.835 00:24:31.835 real 0m5.929s 00:24:31.835 user 0m7.899s 00:24:31.835 sys 0m1.521s 00:24:31.835 13:44:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:31.835 13:44:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:24:31.835 ************************************ 00:24:31.835 END TEST bdev_nbd 00:24:31.835 ************************************ 00:24:31.835 13:44:31 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:24:31.835 13:44:31 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:24:31.835 13:44:31 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:24:31.835 13:44:31 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:24:31.835 13:44:31 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:31.835 13:44:31 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:31.835 13:44:31 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:31.835 ************************************ 00:24:31.835 START TEST bdev_fio 00:24:31.835 ************************************ 00:24:31.835 13:44:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:24:31.835 13:44:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:24:31.835 13:44:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:24:31.835 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:24:31.835 13:44:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:24:31.835 13:44:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:24:31.835 13:44:31 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:24:31.835 13:44:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:24:31.835 13:44:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:24:31.835 13:44:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:24:31.835 13:44:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:24:31.835 13:44:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:24:31.835 13:44:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:24:31.835 13:44:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:24:31.835 13:44:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:24:31.835 13:44:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:24:31.835 13:44:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:24:32.094 13:44:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:24:32.094 13:44:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:24:32.094 13:44:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:24:32.094 13:44:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:24:32.094 13:44:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:24:32.094 13:44:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:24:32.094 13:44:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:24:32.094 13:44:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:24:32.094 13:44:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:24:32.094 13:44:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:24:32.094 13:44:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:24:32.094 13:44:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:24:32.094 13:44:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:24:32.094 13:44:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:24:32.094 13:44:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:32.094 13:44:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:24:32.094 ************************************ 00:24:32.094 START TEST bdev_fio_rw_verify 00:24:32.094 ************************************ 00:24:32.094 13:44:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:24:32.094 13:44:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:24:32.094 13:44:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:32.094 13:44:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:32.094 13:44:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:32.094 13:44:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:32.094 13:44:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:24:32.094 13:44:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:32.094 13:44:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:32.094 13:44:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:32.094 13:44:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:24:32.094 13:44:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:32.094 13:44:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:24:32.094 13:44:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:24:32.094 13:44:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:24:32.094 13:44:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:32.094 13:44:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:24:32.353 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:24:32.353 fio-3.35 00:24:32.353 Starting 1 thread 00:24:44.580 00:24:44.580 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90164: Wed Nov 20 13:44:42 2024 00:24:44.580 read: IOPS=10.5k, BW=40.9MiB/s (42.9MB/s)(409MiB/10001msec) 00:24:44.580 slat (usec): min=19, max=171, avg=23.08, stdev= 3.31 00:24:44.580 clat (usec): min=10, max=694, avg=152.03, stdev=56.20 00:24:44.580 lat (usec): min=32, max=777, avg=175.12, stdev=56.80 00:24:44.580 clat percentiles (usec): 00:24:44.580 | 50.000th=[ 151], 99.000th=[ 265], 99.900th=[ 343], 99.990th=[ 465], 00:24:44.580 | 99.999th=[ 660] 00:24:44.580 write: IOPS=11.0k, BW=43.0MiB/s (45.1MB/s)(425MiB/9870msec); 0 zone resets 00:24:44.580 slat (usec): min=8, max=272, avg=19.43, stdev= 4.33 00:24:44.580 clat (usec): min=63, max=727, avg=346.85, stdev=48.15 00:24:44.580 lat (usec): min=80, max=766, avg=366.27, stdev=49.21 00:24:44.580 clat percentiles (usec): 00:24:44.580 | 50.000th=[ 351], 99.000th=[ 453], 99.900th=[ 529], 99.990th=[ 635], 00:24:44.580 | 99.999th=[ 693] 00:24:44.580 bw ( KiB/s): min=39176, max=48502, per=98.41%, avg=43380.11, stdev=2956.87, samples=19 00:24:44.580 iops : min= 9794, max=12125, avg=10845.00, stdev=739.17, samples=19 00:24:44.580 lat (usec) : 20=0.01%, 50=0.01%, 100=11.37%, 
250=37.11%, 500=51.42% 00:24:44.580 lat (usec) : 750=0.09% 00:24:44.580 cpu : usr=98.61%, sys=0.59%, ctx=30, majf=0, minf=8803 00:24:44.580 IO depths : 1=7.7%, 2=20.0%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:44.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.580 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.580 issued rwts: total=104710,108770,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.580 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:44.580 00:24:44.580 Run status group 0 (all jobs): 00:24:44.580 READ: bw=40.9MiB/s (42.9MB/s), 40.9MiB/s-40.9MiB/s (42.9MB/s-42.9MB/s), io=409MiB (429MB), run=10001-10001msec 00:24:44.580 WRITE: bw=43.0MiB/s (45.1MB/s), 43.0MiB/s-43.0MiB/s (45.1MB/s-45.1MB/s), io=425MiB (446MB), run=9870-9870msec 00:24:44.838 ----------------------------------------------------- 00:24:44.838 Suppressions used: 00:24:44.838 count bytes template 00:24:44.838 1 7 /usr/src/fio/parse.c 00:24:44.838 790 75840 /usr/src/fio/iolog.c 00:24:44.838 1 8 libtcmalloc_minimal.so 00:24:44.838 1 904 libcrypto.so 00:24:44.838 ----------------------------------------------------- 00:24:44.838 00:24:44.838 00:24:44.838 real 0m12.880s 00:24:44.838 user 0m13.243s 00:24:44.838 sys 0m0.857s 00:24:44.838 13:44:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:44.838 13:44:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:24:44.838 ************************************ 00:24:44.838 END TEST bdev_fio_rw_verify 00:24:44.838 ************************************ 00:24:45.096 13:44:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:24:45.096 13:44:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:24:45.096 13:44:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:24:45.096 13:44:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:24:45.096 13:44:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:24:45.096 13:44:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:24:45.096 13:44:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:24:45.096 13:44:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:24:45.096 13:44:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:24:45.096 13:44:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:24:45.096 13:44:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:24:45.096 13:44:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:24:45.096 13:44:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:24:45.096 13:44:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:24:45.096 13:44:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:24:45.096 13:44:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:24:45.096 13:44:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:24:45.096 13:44:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "9b2742f0-ddca-4cd6-abd2-e5ef74a7881e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "9b2742f0-ddca-4cd6-abd2-e5ef74a7881e",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "9b2742f0-ddca-4cd6-abd2-e5ef74a7881e",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "88cb99ae-a5ff-4e5e-92f7-0cbf96bf3ec5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "749dbbee-c0a6-410a-912f-0efa6ff3d4a4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "43168ee9-7f45-43bd-b47e-959cdf3d284a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:24:45.096 13:44:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:24:45.096 13:44:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:24:45.096 13:44:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:24:45.096 /home/vagrant/spdk_repo/spdk 00:24:45.096 13:44:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:24:45.096 13:44:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:24:45.096 00:24:45.096 real 
0m13.144s 00:24:45.096 user 0m13.356s 00:24:45.096 sys 0m0.983s 00:24:45.096 13:44:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:45.096 13:44:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:24:45.096 ************************************ 00:24:45.096 END TEST bdev_fio 00:24:45.096 ************************************ 00:24:45.096 13:44:44 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:45.096 13:44:44 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:24:45.096 13:44:44 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:24:45.096 13:44:44 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:45.096 13:44:44 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:45.096 ************************************ 00:24:45.096 START TEST bdev_verify 00:24:45.096 ************************************ 00:24:45.096 13:44:44 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:24:45.354 [2024-11-20 13:44:44.618762] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 
00:24:45.354 [2024-11-20 13:44:44.618898] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90328 ] 00:24:45.354 [2024-11-20 13:44:44.789971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:45.613 [2024-11-20 13:44:44.902008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:45.613 [2024-11-20 13:44:44.902038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:46.181 Running I/O for 5 seconds... 00:24:48.053 13322.00 IOPS, 52.04 MiB/s [2024-11-20T13:44:48.474Z] 14006.00 IOPS, 54.71 MiB/s [2024-11-20T13:44:49.850Z] 14158.67 IOPS, 55.31 MiB/s [2024-11-20T13:44:50.787Z] 14206.75 IOPS, 55.50 MiB/s [2024-11-20T13:44:50.787Z] 14195.00 IOPS, 55.45 MiB/s 00:24:51.302 Latency(us) 00:24:51.302 [2024-11-20T13:44:50.787Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:51.302 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:51.302 Verification LBA range: start 0x0 length 0x2000 00:24:51.302 raid5f : 5.01 6997.20 27.33 0.00 0.00 27439.52 210.56 23477.15 00:24:51.302 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:51.302 Verification LBA range: start 0x2000 length 0x2000 00:24:51.302 raid5f : 5.02 7185.11 28.07 0.00 0.00 26689.94 98.70 23161.32 00:24:51.302 [2024-11-20T13:44:50.787Z] =================================================================================================================== 00:24:51.302 [2024-11-20T13:44:50.787Z] Total : 14182.31 55.40 0.00 0.00 27059.41 98.70 23477.15 00:24:52.681 00:24:52.681 real 0m7.373s 00:24:52.681 user 0m13.625s 00:24:52.681 sys 0m0.281s 00:24:52.681 13:44:51 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:52.681 13:44:51 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:24:52.681 ************************************ 00:24:52.681 END TEST bdev_verify 00:24:52.681 ************************************ 00:24:52.681 13:44:51 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:24:52.681 13:44:51 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:24:52.681 13:44:51 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:52.681 13:44:51 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:52.681 ************************************ 00:24:52.681 START TEST bdev_verify_big_io 00:24:52.681 ************************************ 00:24:52.681 13:44:51 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:24:52.681 [2024-11-20 13:44:52.062710] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:24:52.681 [2024-11-20 13:44:52.062843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90425 ] 00:24:52.940 [2024-11-20 13:44:52.242845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:52.940 [2024-11-20 13:44:52.363930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:52.940 [2024-11-20 13:44:52.363962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:53.508 Running I/O for 5 seconds... 
00:24:55.457 630.00 IOPS, 39.38 MiB/s [2024-11-20T13:44:56.322Z] 760.00 IOPS, 47.50 MiB/s [2024-11-20T13:44:57.259Z] 761.33 IOPS, 47.58 MiB/s [2024-11-20T13:44:58.196Z] 761.50 IOPS, 47.59 MiB/s [2024-11-20T13:44:58.196Z] 761.60 IOPS, 47.60 MiB/s 00:24:58.711 Latency(us) 00:24:58.711 [2024-11-20T13:44:58.196Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:58.711 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:24:58.711 Verification LBA range: start 0x0 length 0x200 00:24:58.711 raid5f : 5.15 394.73 24.67 0.00 0.00 8021620.83 160.39 370581.08 00:24:58.711 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:24:58.711 Verification LBA range: start 0x200 length 0x200 00:24:58.711 raid5f : 5.13 396.41 24.78 0.00 0.00 7967529.52 210.56 357105.40 00:24:58.711 [2024-11-20T13:44:58.196Z] =================================================================================================================== 00:24:58.711 [2024-11-20T13:44:58.196Z] Total : 791.13 49.45 0.00 0.00 7994575.18 160.39 370581.08 00:25:00.088 00:25:00.088 real 0m7.591s 00:25:00.088 user 0m14.008s 00:25:00.088 sys 0m0.291s 00:25:00.088 13:44:59 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:00.088 ************************************ 00:25:00.088 END TEST bdev_verify_big_io 00:25:00.088 ************************************ 00:25:00.088 13:44:59 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:25:00.348 13:44:59 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:00.348 13:44:59 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:25:00.348 13:44:59 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:00.348 13:44:59 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:00.348 ************************************ 00:25:00.348 START TEST bdev_write_zeroes 00:25:00.348 ************************************ 00:25:00.348 13:44:59 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:00.348 [2024-11-20 13:44:59.719003] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization... 00:25:00.348 [2024-11-20 13:44:59.719144] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90526 ] 00:25:00.606 [2024-11-20 13:44:59.903574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.606 [2024-11-20 13:45:00.040314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.174 Running I/O for 1 seconds... 
00:25:02.153 23559.00 IOPS, 92.03 MiB/s
00:25:02.153 Latency(us)
00:25:02.153 [2024-11-20T13:45:01.638Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:02.153 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:25:02.153 raid5f : 1.01 23535.29 91.93 0.00 0.00 5419.69 1671.30 7369.51
00:25:02.153 [2024-11-20T13:45:01.638Z] ===================================================================================================================
00:25:02.153 [2024-11-20T13:45:01.638Z] Total : 23535.29 91.93 0.00 0.00 5419.69 1671.30 7369.51
00:25:04.058
00:25:04.058 real 0m3.493s
00:25:04.058 user 0m3.103s
00:25:04.058 sys 0m0.257s
00:25:04.058 13:45:03 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:04.058 13:45:03 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:25:04.058 ************************************
00:25:04.058 END TEST bdev_write_zeroes
00:25:04.058 ************************************
00:25:04.058 13:45:03 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:25:04.058 13:45:03 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:25:04.058 13:45:03 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:04.058 13:45:03 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:25:04.058 ************************************
00:25:04.058 START TEST bdev_json_nonenclosed
00:25:04.058 ************************************
00:25:04.058 13:45:03 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:25:04.058 [2024-11-20 13:45:03.284426] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization...
00:25:04.058 [2024-11-20 13:45:03.284553] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90579 ]
00:25:04.058 [2024-11-20 13:45:03.466215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:04.316 [2024-11-20 13:45:03.579830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:04.316 [2024-11-20 13:45:03.579927] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:25:04.316 [2024-11-20 13:45:03.579958] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:25:04.316 [2024-11-20 13:45:03.579987] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:25:04.574
00:25:04.575 real 0m0.655s
00:25:04.575 user 0m0.402s
00:25:04.575 sys 0m0.148s
00:25:04.575 13:45:03 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:04.575 13:45:03 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:25:04.575 ************************************
00:25:04.575 END TEST bdev_json_nonenclosed
00:25:04.575 ************************************
00:25:04.575 13:45:03 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:25:04.575 13:45:03 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:25:04.575 13:45:03 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:04.575 13:45:03 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:25:04.575 ************************************
00:25:04.575 START TEST bdev_json_nonarray
00:25:04.575 ************************************
00:25:04.575 13:45:03 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:25:04.575 [2024-11-20 13:45:04.005810] Starting SPDK v25.01-pre git sha1 82b85d9ca / DPDK 24.03.0 initialization...
00:25:04.575 [2024-11-20 13:45:04.005937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90610 ]
00:25:04.833 [2024-11-20 13:45:04.181618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:05.092 [2024-11-20 13:45:04.352299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:05.092 [2024-11-20 13:45:04.352409] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:25:05.092 [2024-11-20 13:45:04.352432] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:25:05.092 [2024-11-20 13:45:04.352454] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:25:05.351
00:25:05.351 real 0m0.696s
00:25:05.351 user 0m0.445s
00:25:05.351 sys 0m0.146s
00:25:05.351 13:45:04 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:05.351 13:45:04 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:25:05.351 ************************************
00:25:05.351 END TEST bdev_json_nonarray
00:25:05.351 ************************************
00:25:05.351 13:45:04 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]]
00:25:05.351 13:45:04 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]]
00:25:05.351 13:45:04 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]]
00:25:05.351 13:45:04 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT
00:25:05.351 13:45:04 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup
00:25:05.351 13:45:04 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:25:05.351 13:45:04 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:25:05.351 13:45:04 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]]
00:25:05.351 13:45:04 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]]
00:25:05.351 13:45:04 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]]
00:25:05.351 13:45:04 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]]
00:25:05.351
00:25:05.351 real 0m49.331s
00:25:05.351 user 1m6.577s
00:25:05.351 sys 0m5.404s
00:25:05.351 13:45:04 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:05.351 13:45:04 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:25:05.351 ************************************
00:25:05.351 END TEST blockdev_raid5f
00:25:05.351 ************************************
00:25:05.351 13:45:04 -- spdk/autotest.sh@194 -- # uname -s
00:25:05.351 13:45:04 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:25:05.351 13:45:04 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:25:05.352 13:45:04 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:25:05.352 13:45:04 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
00:25:05.352 13:45:04 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:25:05.352 13:45:04 -- spdk/autotest.sh@260 -- # timing_exit lib
00:25:05.352 13:45:04 -- common/autotest_common.sh@732 -- # xtrace_disable
00:25:05.352 13:45:04 -- common/autotest_common.sh@10 -- # set +x
00:25:05.352 13:45:04 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:25:05.352 13:45:04 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:25:05.352 13:45:04 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']'
00:25:05.352 13:45:04 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:25:05.352 13:45:04 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:25:05.352 13:45:04 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:25:05.352 13:45:04 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:25:05.352 13:45:04 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:25:05.352 13:45:04 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:25:05.352 13:45:04 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:25:05.352 13:45:04 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:25:05.352 13:45:04 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:25:05.352 13:45:04 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:25:05.352 13:45:04 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:25:05.352 13:45:04 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:25:05.352 13:45:04 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:25:05.352 13:45:04 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:25:05.352 13:45:04 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:25:05.352 13:45:04 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:25:05.352 13:45:04 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:25:05.352 13:45:04 -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:05.352 13:45:04 -- common/autotest_common.sh@10 -- # set +x
00:25:05.352 13:45:04 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:25:05.352 13:45:04 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:25:05.352 13:45:04 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:25:05.352 13:45:04 -- common/autotest_common.sh@10 -- # set +x
00:25:07.882 INFO: APP EXITING
00:25:07.882 INFO: killing all VMs
00:25:07.882 INFO: killing vhost app
00:25:07.882 INFO: EXIT DONE
00:25:08.140 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:25:08.140 Waiting for block devices as requested
00:25:08.140 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:25:08.400 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:25:09.336 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:25:09.336 Cleaning
00:25:09.336 Removing: /var/run/dpdk/spdk0/config
00:25:09.336 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:25:09.336 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:25:09.336 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:25:09.336 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:25:09.336 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:25:09.336 Removing: /var/run/dpdk/spdk0/hugepage_info
00:25:09.336 Removing: /dev/shm/spdk_tgt_trace.pid56670
00:25:09.336 Removing: /var/run/dpdk/spdk0
00:25:09.336 Removing: /var/run/dpdk/spdk_pid56435
00:25:09.336 Removing: /var/run/dpdk/spdk_pid56670
00:25:09.336 Removing: /var/run/dpdk/spdk_pid56905
00:25:09.336 Removing: /var/run/dpdk/spdk_pid57009
00:25:09.336 Removing: /var/run/dpdk/spdk_pid57065
00:25:09.336 Removing: /var/run/dpdk/spdk_pid57204
00:25:09.336 Removing: /var/run/dpdk/spdk_pid57222
00:25:09.336 Removing: /var/run/dpdk/spdk_pid57432
00:25:09.336 Removing: /var/run/dpdk/spdk_pid57538
00:25:09.336 Removing: /var/run/dpdk/spdk_pid57651
00:25:09.336 Removing: /var/run/dpdk/spdk_pid57778
00:25:09.336 Removing: /var/run/dpdk/spdk_pid57886
00:25:09.336 Removing: /var/run/dpdk/spdk_pid57926
00:25:09.336 Removing: /var/run/dpdk/spdk_pid57962
00:25:09.336 Removing: /var/run/dpdk/spdk_pid58038
00:25:09.336 Removing: /var/run/dpdk/spdk_pid58150
00:25:09.336 Removing: /var/run/dpdk/spdk_pid58597
00:25:09.336 Removing: /var/run/dpdk/spdk_pid58678
00:25:09.336 Removing: /var/run/dpdk/spdk_pid58752
00:25:09.336 Removing: /var/run/dpdk/spdk_pid58773
00:25:09.336 Removing: /var/run/dpdk/spdk_pid58930
00:25:09.336 Removing: /var/run/dpdk/spdk_pid58946
00:25:09.336 Removing: /var/run/dpdk/spdk_pid59107
00:25:09.336 Removing: /var/run/dpdk/spdk_pid59123
00:25:09.336 Removing: /var/run/dpdk/spdk_pid59193
00:25:09.336 Removing: /var/run/dpdk/spdk_pid59216
00:25:09.336 Removing: /var/run/dpdk/spdk_pid59280
00:25:09.336 Removing: /var/run/dpdk/spdk_pid59298
00:25:09.336 Removing: /var/run/dpdk/spdk_pid59499
00:25:09.336 Removing: /var/run/dpdk/spdk_pid59541
00:25:09.336 Removing: /var/run/dpdk/spdk_pid59634
00:25:09.336 Removing: /var/run/dpdk/spdk_pid60990
00:25:09.336 Removing: /var/run/dpdk/spdk_pid61202
00:25:09.336 Removing: /var/run/dpdk/spdk_pid61347
00:25:09.336 Removing: /var/run/dpdk/spdk_pid61985
00:25:09.336 Removing: /var/run/dpdk/spdk_pid62202
00:25:09.336 Removing: /var/run/dpdk/spdk_pid62342
00:25:09.336 Removing: /var/run/dpdk/spdk_pid62980
00:25:09.336 Removing: /var/run/dpdk/spdk_pid63309
00:25:09.336 Removing: /var/run/dpdk/spdk_pid63449
00:25:09.336 Removing: /var/run/dpdk/spdk_pid64824
00:25:09.336 Removing: /var/run/dpdk/spdk_pid65072
00:25:09.596 Removing: /var/run/dpdk/spdk_pid65224
00:25:09.596 Removing: /var/run/dpdk/spdk_pid66611
00:25:09.596 Removing: /var/run/dpdk/spdk_pid66870
00:25:09.596 Removing: /var/run/dpdk/spdk_pid67011
00:25:09.596 Removing: /var/run/dpdk/spdk_pid68404
00:25:09.596 Removing: /var/run/dpdk/spdk_pid68844
00:25:09.596 Removing: /var/run/dpdk/spdk_pid68995
00:25:09.596 Removing: /var/run/dpdk/spdk_pid70480
00:25:09.596 Removing: /var/run/dpdk/spdk_pid70749
00:25:09.596 Removing: /var/run/dpdk/spdk_pid70889
00:25:09.596 Removing: /var/run/dpdk/spdk_pid72374
00:25:09.596 Removing: /var/run/dpdk/spdk_pid72634
00:25:09.596 Removing: /var/run/dpdk/spdk_pid72780
00:25:09.596 Removing: /var/run/dpdk/spdk_pid74260
00:25:09.596 Removing: /var/run/dpdk/spdk_pid74747
00:25:09.596 Removing: /var/run/dpdk/spdk_pid74893
00:25:09.596 Removing: /var/run/dpdk/spdk_pid75031
00:25:09.596 Removing: /var/run/dpdk/spdk_pid75480
00:25:09.596 Removing: /var/run/dpdk/spdk_pid76232
00:25:09.596 Removing: /var/run/dpdk/spdk_pid76623
00:25:09.596 Removing: /var/run/dpdk/spdk_pid77324
00:25:09.596 Removing: /var/run/dpdk/spdk_pid77787
00:25:09.596 Removing: /var/run/dpdk/spdk_pid78552
00:25:09.596 Removing: /var/run/dpdk/spdk_pid78966
00:25:09.596 Removing: /var/run/dpdk/spdk_pid80930
00:25:09.596 Removing: /var/run/dpdk/spdk_pid81374
00:25:09.596 Removing: /var/run/dpdk/spdk_pid81814
00:25:09.596 Removing: /var/run/dpdk/spdk_pid83915
00:25:09.596 Removing: /var/run/dpdk/spdk_pid84405
00:25:09.596 Removing: /var/run/dpdk/spdk_pid84926
00:25:09.596 Removing: /var/run/dpdk/spdk_pid85990
00:25:09.596 Removing: /var/run/dpdk/spdk_pid86314
00:25:09.596 Removing: /var/run/dpdk/spdk_pid87251
00:25:09.596 Removing: /var/run/dpdk/spdk_pid87575
00:25:09.596 Removing: /var/run/dpdk/spdk_pid88512
00:25:09.596 Removing: /var/run/dpdk/spdk_pid88835
00:25:09.596 Removing: /var/run/dpdk/spdk_pid89508
00:25:09.596 Removing: /var/run/dpdk/spdk_pid89788
00:25:09.596 Removing: /var/run/dpdk/spdk_pid89855
00:25:09.596 Removing: /var/run/dpdk/spdk_pid89903
00:25:09.596 Removing: /var/run/dpdk/spdk_pid90149
00:25:09.596 Removing: /var/run/dpdk/spdk_pid90328
00:25:09.596 Removing: /var/run/dpdk/spdk_pid90425
00:25:09.596 Removing: /var/run/dpdk/spdk_pid90526
00:25:09.596 Removing: /var/run/dpdk/spdk_pid90579
00:25:09.596 Removing: /var/run/dpdk/spdk_pid90610
00:25:09.596 Clean
00:25:09.596 13:45:09 -- common/autotest_common.sh@1453 -- # return 0
00:25:09.596 13:45:09 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:25:09.596 13:45:09 -- common/autotest_common.sh@732 -- # xtrace_disable
00:25:09.596 13:45:09 -- common/autotest_common.sh@10 -- # set +x
00:25:09.854 13:45:09 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:25:09.854 13:45:09 -- common/autotest_common.sh@732 -- # xtrace_disable
00:25:09.854 13:45:09 -- common/autotest_common.sh@10 -- # set +x
00:25:09.854 13:45:09 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:25:09.854 13:45:09 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:25:09.854 13:45:09 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:25:09.854 13:45:09 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:25:09.854 13:45:09 -- spdk/autotest.sh@398 -- # hostname
00:25:09.854 13:45:09 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:25:10.112 geninfo: WARNING: invalid characters removed from testname!
00:25:36.650 13:45:34 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:25:39.176 13:45:38 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:25:41.075 13:45:40 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:25:43.607 13:45:42 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:25:45.508 13:45:44 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:25:47.437 13:45:46 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:25:49.984 13:45:49 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:25:49.984 13:45:49 -- spdk/autorun.sh@1 -- $ timing_finish
00:25:49.984 13:45:49 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:25:49.984 13:45:49 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:25:49.984 13:45:49 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:25:49.984 13:45:49 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:25:49.984 + [[ -n 5209 ]]
00:25:49.984 + sudo kill 5209
00:25:49.992 [Pipeline] }
00:25:50.005 [Pipeline] // timeout
00:25:50.010 [Pipeline] }
00:25:50.021 [Pipeline] // stage
00:25:50.024 [Pipeline] }
00:25:50.033 [Pipeline] // catchError
00:25:50.039 [Pipeline] stage
00:25:50.041 [Pipeline] { (Stop VM)
00:25:50.048 [Pipeline] sh
00:25:50.324 + vagrant halt
00:25:53.614 ==> default: Halting domain...
00:26:00.239 [Pipeline] sh
00:26:00.519 + vagrant destroy -f
00:26:03.829 ==> default: Removing domain...
00:26:03.842 [Pipeline] sh
00:26:04.121 + mv output /var/jenkins/workspace/raid-vg-autotest/output
00:26:04.129 [Pipeline] }
00:26:04.143 [Pipeline] // stage
00:26:04.146 [Pipeline] }
00:26:04.157 [Pipeline] // dir
00:26:04.161 [Pipeline] }
00:26:04.175 [Pipeline] // wrap
00:26:04.184 [Pipeline] }
00:26:04.194 [Pipeline] // catchError
00:26:04.203 [Pipeline] stage
00:26:04.205 [Pipeline] { (Epilogue)
00:26:04.218 [Pipeline] sh
00:26:04.506 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:26:11.085 [Pipeline] catchError
00:26:11.086 [Pipeline] {
00:26:11.112 [Pipeline] sh
00:26:11.432 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:26:11.692 Artifacts sizes are good
00:26:11.701 [Pipeline] }
00:26:11.718 [Pipeline] // catchError
00:26:11.732 [Pipeline] archiveArtifacts
00:26:11.740 Archiving artifacts
00:26:11.835 [Pipeline] cleanWs
00:26:11.848 [WS-CLEANUP] Deleting project workspace...
00:26:11.848 [WS-CLEANUP] Deferred wipeout is used...
00:26:11.856 [WS-CLEANUP] done
00:26:11.858 [Pipeline] }
00:26:11.874 [Pipeline] // stage
00:26:11.880 [Pipeline] }
00:26:11.895 [Pipeline] // node
00:26:11.900 [Pipeline] End of Pipeline
00:26:11.936 Finished: SUCCESS